00:00:00.001 Started by upstream project "autotest-per-patch" build number 132731
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.057 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.058 The recommended git tool is: git
00:00:00.058 using credential 00000000-0000-0000-0000-000000000002
00:00:00.060 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.116 Fetching changes from the remote Git repository
00:00:00.122 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.179 Using shallow fetch with depth 1
00:00:00.179 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.179 > git --version # timeout=10
00:00:00.229 > git --version # 'git version 2.39.2'
00:00:00.229 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.265 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.265 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.253 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.266 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.279 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.279 > git config core.sparsecheckout # timeout=10
00:00:07.292 > git read-tree -mu HEAD # timeout=10
00:00:07.314 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.348 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.348 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.433 [Pipeline] Start of Pipeline
00:00:07.447 [Pipeline] library
00:00:07.448 Loading library shm_lib@master
00:00:07.448 Library shm_lib@master is cached. Copying from home.
00:00:07.467 [Pipeline] node
00:00:07.478 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.479 [Pipeline] {
00:00:07.489 [Pipeline] catchError
00:00:07.490 [Pipeline] {
00:00:07.503 [Pipeline] wrap
00:00:07.510 [Pipeline] {
00:00:07.519 [Pipeline] stage
00:00:07.520 [Pipeline] { (Prologue)
00:00:07.722 [Pipeline] sh
00:00:08.008 + logger -p user.info -t JENKINS-CI
00:00:08.025 [Pipeline] echo
00:00:08.026 Node: WFP6
00:00:08.033 [Pipeline] sh
00:00:08.348 [Pipeline] setCustomBuildProperty
00:00:08.360 [Pipeline] echo
00:00:08.361 Cleanup processes
00:00:08.366 [Pipeline] sh
00:00:08.649 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.649 2732402 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.662 [Pipeline] sh
00:00:08.947 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.947 ++ grep -v 'sudo pgrep'
00:00:08.947 ++ awk '{print $1}'
00:00:08.947 + sudo kill -9
00:00:08.947 + true
00:00:08.963 [Pipeline] cleanWs
00:00:08.974 [WS-CLEANUP] Deleting project workspace...
00:00:08.974 [WS-CLEANUP] Deferred wipeout is used...
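The "Cleanup processes" step above chains pgrep, grep, awk, and kill to remove stale workspace processes before the run. A minimal standalone sketch of that pipeline, using a background sleep as a stand-in for a leftover spdk process (the duration and pattern here are illustrative, not from the job):

```shell
#!/usr/bin/env bash
# Sketch of the pipeline's cleanup step: pgrep -af lists PID + full command
# line, grep drops the pgrep process itself, awk keeps the PID column, and
# kill -9 removes whatever is left.
d=300
sleep "$d" &               # stand-in for a stale workspace process
demo_pid=$!

pattern="sleep $d"         # built at runtime so this script never matches itself
pids=$(pgrep -af "$pattern" | grep -v 'pgrep' | awk '{print $1}')

# An empty PID list must not fail the job -- hence the `|| true`,
# mirroring the `+ true` in the log.
[ -n "$pids" ] && kill -9 $pids || true

wait "$demo_pid" 2>/dev/null   # reap the child so the PID truly disappears
if ! kill -0 "$demo_pid" 2>/dev/null; then
  echo "cleanup ok"
fi
```

The `grep -v` guard matters because the pgrep invocation itself would otherwise appear in its own match list when driven through `sudo` as in the log.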
00:00:08.981 [WS-CLEANUP] done
00:00:08.985 [Pipeline] setCustomBuildProperty
00:00:09.000 [Pipeline] sh
00:00:09.293 + sudo git config --global --replace-all safe.directory '*'
00:00:09.401 [Pipeline] httpRequest
00:00:10.172 [Pipeline] echo
00:00:10.174 Sorcerer 10.211.164.101 is alive
00:00:10.186 [Pipeline] retry
00:00:10.189 [Pipeline] {
00:00:10.205 [Pipeline] httpRequest
00:00:10.209 HttpMethod: GET
00:00:10.210 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.210 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.218 Response Code: HTTP/1.1 200 OK
00:00:10.218 Success: Status code 200 is in the accepted range: 200,404
00:00:10.218 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:32.818 [Pipeline] }
00:00:32.839 [Pipeline] // retry
00:00:32.846 [Pipeline] sh
00:00:33.128 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:33.145 [Pipeline] httpRequest
00:00:33.864 [Pipeline] echo
00:00:33.866 Sorcerer 10.211.164.101 is alive
00:00:33.878 [Pipeline] retry
00:00:33.881 [Pipeline] {
00:00:33.897 [Pipeline] httpRequest
00:00:33.902 HttpMethod: GET
00:00:33.902 URL: http://10.211.164.101/packages/spdk_562857cff21b5c883ba12fb3bf8b656f974ee75b.tar.gz
00:00:33.903 Sending request to url: http://10.211.164.101/packages/spdk_562857cff21b5c883ba12fb3bf8b656f974ee75b.tar.gz
00:00:33.928 Response Code: HTTP/1.1 200 OK
00:00:33.932 Success: Status code 200 is in the accepted range: 200,404
00:00:33.938 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_562857cff21b5c883ba12fb3bf8b656f974ee75b.tar.gz
00:02:16.178 [Pipeline] }
00:02:16.192 [Pipeline] // retry
00:02:16.198 [Pipeline] sh
00:02:16.479 + tar --no-same-owner -xf spdk_562857cff21b5c883ba12fb3bf8b656f974ee75b.tar.gz
00:02:19.022 [Pipeline] sh
00:02:19.305 + git -C spdk log --oneline -n5
00:02:19.305 562857cff lib/mlx5: API to configure UMR
00:02:19.305 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:02:19.305 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:02:19.305 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:02:19.305 e2dfdf06c accel/mlx5: Register post_poller handler
00:02:19.315 [Pipeline] }
00:02:19.326 [Pipeline] // stage
00:02:19.332 [Pipeline] stage
00:02:19.333 [Pipeline] { (Prepare)
00:02:19.346 [Pipeline] writeFile
00:02:19.361 [Pipeline] sh
00:02:19.644 + logger -p user.info -t JENKINS-CI
00:02:19.656 [Pipeline] sh
00:02:19.940 + logger -p user.info -t JENKINS-CI
00:02:19.952 [Pipeline] sh
00:02:20.235 + cat autorun-spdk.conf
00:02:20.235 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:20.235 SPDK_TEST_NVMF=1
00:02:20.235 SPDK_TEST_NVME_CLI=1
00:02:20.235 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:20.235 SPDK_TEST_NVMF_NICS=e810
00:02:20.235 SPDK_TEST_VFIOUSER=1
00:02:20.235 SPDK_RUN_UBSAN=1
00:02:20.235 NET_TYPE=phy
00:02:20.242 RUN_NIGHTLY=0
00:02:20.246 [Pipeline] readFile
00:02:20.268 [Pipeline] withEnv
00:02:20.270 [Pipeline] {
00:02:20.282 [Pipeline] sh
00:02:20.566 + set -ex
00:02:20.566 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:20.566 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:20.566 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:20.566 ++ SPDK_TEST_NVMF=1
00:02:20.566 ++ SPDK_TEST_NVME_CLI=1
00:02:20.566 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:20.566 ++ SPDK_TEST_NVMF_NICS=e810
00:02:20.566 ++ SPDK_TEST_VFIOUSER=1
00:02:20.566 ++ SPDK_RUN_UBSAN=1
00:02:20.566 ++ NET_TYPE=phy
00:02:20.566 ++ RUN_NIGHTLY=0
00:02:20.566 + case $SPDK_TEST_NVMF_NICS in
00:02:20.566 + DRIVERS=ice
00:02:20.566 + [[ tcp == \r\d\m\a ]]
00:02:20.566 + [[ -n ice ]]
00:02:20.566 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:20.566 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:20.566 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:20.566 rmmod: ERROR: Module irdma is not currently loaded
00:02:20.566 rmmod: ERROR: Module i40iw is not currently loaded
00:02:20.566 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:20.566 + true
00:02:20.566 + for D in $DRIVERS
00:02:20.566 + sudo modprobe ice
00:02:20.566 + exit 0
00:02:20.575 [Pipeline] }
00:02:20.592 [Pipeline] // withEnv
00:02:20.597 [Pipeline] }
00:02:20.613 [Pipeline] // stage
00:02:20.622 [Pipeline] catchError
00:02:20.624 [Pipeline] {
00:02:20.637 [Pipeline] timeout
00:02:20.637 Timeout set to expire in 1 hr 0 min
00:02:20.639 [Pipeline] {
00:02:20.652 [Pipeline] stage
00:02:20.654 [Pipeline] { (Tests)
00:02:20.668 [Pipeline] sh
00:02:20.955 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:20.955 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:20.955 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:20.955 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:20.955 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:20.955 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:20.955 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:20.955 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:20.955 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:20.955 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:20.955 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:20.955 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:20.955 + source /etc/os-release
00:02:20.955 ++ NAME='Fedora Linux'
00:02:20.955 ++ VERSION='39 (Cloud Edition)'
00:02:20.955 ++ ID=fedora
00:02:20.955 ++ VERSION_ID=39
00:02:20.955 ++ VERSION_CODENAME=
00:02:20.955 ++ PLATFORM_ID=platform:f39
00:02:20.955 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:20.955 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:20.955 ++ LOGO=fedora-logo-icon
00:02:20.955 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:20.955 ++ HOME_URL=https://fedoraproject.org/
00:02:20.955 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:20.955 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:20.955 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:20.955 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:20.955 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:20.955 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:20.955 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:20.955 ++ SUPPORT_END=2024-11-12
00:02:20.955 ++ VARIANT='Cloud Edition'
00:02:20.955 ++ VARIANT_ID=cloud
00:02:20.955 + uname -a
00:02:20.955 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:20.955 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:23.492 Hugepages
00:02:23.492 node hugesize free / total
00:02:23.492 node0 1048576kB 0 / 0
00:02:23.492 node0 2048kB 0 / 0
00:02:23.492 node1 1048576kB 0 / 0
00:02:23.492 node1 2048kB 0 / 0
00:02:23.492
00:02:23.492 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:23.492 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:23.492 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:23.492 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:23.492 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:23.492 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:23.492 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:23.492 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:23.492 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:23.492 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:23.492 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:23.492 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:23.492 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:23.492 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:23.492 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:23.492 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:23.492 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:23.492 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:23.492 + rm -f /tmp/spdk-ld-path
00:02:23.492 + source autorun-spdk.conf
00:02:23.492 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:23.492 ++ SPDK_TEST_NVMF=1
00:02:23.492 ++ SPDK_TEST_NVME_CLI=1
00:02:23.492 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:23.492 ++ SPDK_TEST_NVMF_NICS=e810
00:02:23.492 ++ SPDK_TEST_VFIOUSER=1
00:02:23.492 ++ SPDK_RUN_UBSAN=1
00:02:23.492 ++ NET_TYPE=phy
00:02:23.492 ++ RUN_NIGHTLY=0
00:02:23.493 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:23.493 + [[ -n '' ]]
00:02:23.493 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:23.493 + for M in /var/spdk/build-*-manifest.txt
00:02:23.493 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:23.493 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:23.493 + for M in /var/spdk/build-*-manifest.txt
00:02:23.493 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:23.493 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:23.493 + for M in /var/spdk/build-*-manifest.txt
00:02:23.493 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:23.493 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:23.493 ++ uname
00:02:23.493 + [[ Linux == \L\i\n\u\x ]]
00:02:23.493 + sudo dmesg -T
00:02:23.753 + sudo dmesg --clear
00:02:23.753 + dmesg_pid=2733864
00:02:23.753 + [[ Fedora Linux == FreeBSD ]]
00:02:23.753 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:23.753 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:23.753 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:23.753 + [[ -x /usr/src/fio-static/fio ]]
00:02:23.753 + export FIO_BIN=/usr/src/fio-static/fio
00:02:23.753 + FIO_BIN=/usr/src/fio-static/fio
00:02:23.753 + sudo dmesg -Tw
00:02:23.753 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:23.753 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:23.753 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:23.753 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:23.753 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:23.753 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:23.753 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:23.753 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:23.753 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:23.753 15:19:29 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:23.753 15:19:29 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:23.753 15:19:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:23.753 15:19:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:23.753 15:19:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:02:23.753 15:19:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:23.753 15:19:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:02:23.753 15:19:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:02:23.753 15:19:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:02:23.753 15:19:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:02:23.753 15:19:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:02:23.753 15:19:29 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:23.753 15:19:29 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:23.753 15:19:29 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:23.753 15:19:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:23.753 15:19:29 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:23.753 15:19:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:23.753 15:19:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:23.753 15:19:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:23.753 15:19:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:23.753 15:19:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:23.753 15:19:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:23.753 15:19:29 -- paths/export.sh@5 -- $ export PATH
00:02:23.753 15:19:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:23.753 15:19:29 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:23.753 15:19:29 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:23.753 15:19:29 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733494769.XXXXXX
00:02:23.753 15:19:29 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733494769.hwFm2R
00:02:23.753 15:19:29 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:23.753 15:19:29 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:23.753 15:19:29 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:02:23.753 15:19:29 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:23.753 15:19:29 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:23.753 15:19:29 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:23.753 15:19:29 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:23.753 15:19:29 -- common/autotest_common.sh@10 -- $ set +x
00:02:23.753 15:19:29 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:23.753 15:19:29 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:23.753 15:19:29 -- pm/common@17 -- $ local monitor
00:02:23.753 15:19:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.753 15:19:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.753 15:19:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.753 15:19:29 -- pm/common@21 -- $ date +%s
00:02:23.753 15:19:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:23.753 15:19:29 -- pm/common@21 -- $ date +%s
00:02:23.753 15:19:29 -- pm/common@25 -- $ sleep 1
00:02:23.753 15:19:29 -- pm/common@21 -- $ date +%s
00:02:23.753 15:19:29 -- pm/common@21 -- $ date +%s
00:02:23.753 15:19:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733494769
00:02:23.753 15:19:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733494769
00:02:23.753 15:19:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733494769
00:02:23.753 15:19:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733494769
00:02:24.012 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733494769_collect-cpu-load.pm.log
00:02:24.012 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733494769_collect-vmstat.pm.log
00:02:24.012 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733494769_collect-cpu-temp.pm.log
00:02:24.012 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733494769_collect-bmc-pm.bmc.pm.log
00:02:24.950 15:19:30 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:24.950 15:19:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:24.950 15:19:30 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:24.950 15:19:30 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:24.950 15:19:30 -- spdk/autobuild.sh@16 -- $ date -u
00:02:24.950 Fri Dec 6 02:19:30 PM UTC 2024
00:02:24.950 15:19:30 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:24.950 v25.01-pre-304-g562857cff
00:02:24.950 15:19:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:24.950 15:19:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:24.950 15:19:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:24.950 15:19:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:24.950 15:19:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:24.950 15:19:30 -- common/autotest_common.sh@10 -- $ set +x
00:02:24.950 ************************************
00:02:24.950 START TEST ubsan
00:02:24.950 ************************************
00:02:24.950 15:19:30 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:24.950 using ubsan
00:02:24.950
00:02:24.950 real 0m0.000s
00:02:24.950 user 0m0.000s
00:02:24.950 sys 0m0.000s
00:02:24.950 15:19:30 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:24.950 15:19:30 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:24.950 ************************************
00:02:24.950 END TEST ubsan
00:02:24.950 ************************************
00:02:24.950 15:19:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:24.950 15:19:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:24.950 15:19:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:24.950 15:19:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:24.950 15:19:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:24.950 15:19:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:24.950 15:19:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:24.950 15:19:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:24.950 15:19:30 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:25.209 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:25.209 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:25.468 Using 'verbs' RDMA provider
00:02:38.249 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:50.464 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:50.464 Creating mk/config.mk...done.
00:02:50.464 Creating mk/cc.flags.mk...done.
00:02:50.464 Type 'make' to build.
00:02:50.464 15:19:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:02:50.464 15:19:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:50.464 15:19:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:50.464 15:19:56 -- common/autotest_common.sh@10 -- $ set +x
00:02:50.464 ************************************
00:02:50.464 START TEST make
00:02:50.464 ************************************
00:02:50.464 15:19:56 make -- common/autotest_common.sh@1129 -- $ make -j96
00:02:50.722 make[1]: Nothing to be done for 'all'.
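The `run_test ubsan …` and `run_test make …` entries above wrap each step in START/END banners and per-test timing. A minimal sketch of such a wrapper follows; the real helper lives in SPDK's autotest_common.sh, so the function name and banner width here are illustrative assumptions, not the actual implementation:

```shell
#!/usr/bin/env bash
# Hypothetical run_test-style wrapper: print banner lines around a named
# test, run the given command, and report rough wall-clock timing in the
# style of the log above. Illustrative only; not SPDK's actual helper.
run_test_sketch() {
  local name="$1"; shift
  echo "************ START TEST $name ************"
  local start rc end
  start=$(date +%s)
  "$@"                       # run the actual test command
  rc=$?
  end=$(date +%s)
  echo "real    $((end - start))s"
  echo "************ END TEST $name ************"
  return $rc
}

run_test_sketch demo true
```

The wrapper preserves the wrapped command's exit status, so a failing step still fails the surrounding `trap '… || exit 1' EXIT` logic seen earlier in the log.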
00:02:52.109 The Meson build system
00:02:52.109 Version: 1.5.0
00:02:52.109 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:52.109 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:52.109 Build type: native build
00:02:52.109 Project name: libvfio-user
00:02:52.109 Project version: 0.0.1
00:02:52.109 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:52.109 C linker for the host machine: cc ld.bfd 2.40-14
00:02:52.109 Host machine cpu family: x86_64
00:02:52.109 Host machine cpu: x86_64
00:02:52.109 Run-time dependency threads found: YES
00:02:52.109 Library dl found: YES
00:02:52.109 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:52.109 Run-time dependency json-c found: YES 0.17
00:02:52.109 Run-time dependency cmocka found: YES 1.1.7
00:02:52.109 Program pytest-3 found: NO
00:02:52.109 Program flake8 found: NO
00:02:52.109 Program misspell-fixer found: NO
00:02:52.109 Program restructuredtext-lint found: NO
00:02:52.109 Program valgrind found: YES (/usr/bin/valgrind)
00:02:52.109 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:52.109 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:52.109 Compiler for C supports arguments -Wwrite-strings: YES
00:02:52.109 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:52.109 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:52.109 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:52.109 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:52.109 Build targets in project: 8
00:02:52.109 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:52.109 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:52.109
00:02:52.109 libvfio-user 0.0.1
00:02:52.109
00:02:52.109 User defined options
00:02:52.109 buildtype : debug
00:02:52.109 default_library: shared
00:02:52.109 libdir : /usr/local/lib
00:02:52.110
00:02:52.110 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:52.677 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:52.677 [1/37] Compiling C object samples/null.p/null.c.o
00:02:52.677 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:52.677 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:52.677 [4/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:52.677 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:52.677 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:52.677 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:52.677 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:52.677 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:52.677 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:52.935 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:52.935 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:52.935 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:52.935 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:52.935 [15/37] Compiling C object samples/server.p/server.c.o
00:02:52.935 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:52.935 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:52.935 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:52.935 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:52.935 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:52.935 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:52.935 [22/37] Compiling C object samples/client.p/client.c.o
00:02:52.935 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:52.935 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:52.935 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:52.935 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:52.935 [27/37] Linking target samples/client
00:02:52.935 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:52.935 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:52.935 [30/37] Linking target test/unit_tests
00:02:52.935 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:02:53.194 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:53.194 [33/37] Linking target samples/null
00:02:53.194 [34/37] Linking target samples/lspci
00:02:53.194 [35/37] Linking target samples/shadow_ioeventfd_server
00:02:53.194 [36/37] Linking target samples/gpio-pci-idio-16
00:02:53.194 [37/37] Linking target samples/server
00:02:53.194 INFO: autodetecting backend as ninja
00:02:53.194 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:53.194 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:53.760 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:53.760 ninja: no work to do.
00:02:59.027 The Meson build system
00:02:59.027 Version: 1.5.0
00:02:59.027 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:59.027 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:59.027 Build type: native build
00:02:59.027 Program cat found: YES (/usr/bin/cat)
00:02:59.027 Project name: DPDK
00:02:59.027 Project version: 24.03.0
00:02:59.027 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:59.027 C linker for the host machine: cc ld.bfd 2.40-14
00:02:59.027 Host machine cpu family: x86_64
00:02:59.027 Host machine cpu: x86_64
00:02:59.027 Message: ## Building in Developer Mode ##
00:02:59.027 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:59.027 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:59.027 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:59.027 Program python3 found: YES (/usr/bin/python3)
00:02:59.027 Program cat found: YES (/usr/bin/cat)
00:02:59.027 Compiler for C supports arguments -march=native: YES
00:02:59.027 Checking for size of "void *" : 8
00:02:59.027 Checking for size of "void *" : 8 (cached)
00:02:59.027 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:59.027 Library m found: YES
00:02:59.027 Library numa found: YES
00:02:59.027 Has header "numaif.h" : YES
00:02:59.027 Library fdt found: NO
00:02:59.027 Library execinfo found: NO
00:02:59.027 Has header "execinfo.h" : YES
00:02:59.027 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:59.027 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:59.027 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:59.027 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:59.027 Run-time dependency openssl found: YES 3.1.1
00:02:59.027 Run-time dependency libpcap found: YES 1.10.4
00:02:59.027 Has header "pcap.h" with dependency libpcap: YES
00:02:59.027 Compiler for C supports arguments -Wcast-qual: YES
00:02:59.027 Compiler for C supports arguments -Wdeprecated: YES
00:02:59.027 Compiler for C supports arguments -Wformat: YES
00:02:59.027 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:59.027 Compiler for C supports arguments -Wformat-security: NO
00:02:59.027 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:59.027 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:59.027 Compiler for C supports arguments -Wnested-externs: YES
00:02:59.027 Compiler for C supports arguments -Wold-style-definition: YES
00:02:59.027 Compiler for C supports arguments -Wpointer-arith: YES
00:02:59.027 Compiler for C supports arguments -Wsign-compare: YES
00:02:59.027 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:59.027 Compiler for C supports arguments -Wundef: YES
00:02:59.027 Compiler for C supports arguments -Wwrite-strings: YES
00:02:59.027 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:59.027 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:59.027 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:59.027 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:59.027 Program objdump found: YES (/usr/bin/objdump)
00:02:59.027 Compiler for C supports arguments -mavx512f: YES
00:02:59.027 Checking if "AVX512 checking" compiles: YES
00:02:59.027 Fetching value of define "__SSE4_2__" : 1
00:02:59.027 Fetching value of define "__AES__" : 1
00:02:59.027 Fetching value of define "__AVX__" : 1
00:02:59.027 Fetching value of define "__AVX2__" : 1
00:02:59.027 Fetching value of define "__AVX512BW__" : 1
00:02:59.027 Fetching value of define "__AVX512CD__" : 1
00:02:59.027 Fetching value of define "__AVX512DQ__" : 1
00:02:59.027 Fetching value of define "__AVX512F__" : 1
00:02:59.027 Fetching value of define "__AVX512VL__" : 1 00:02:59.027 Fetching value of define "__PCLMUL__" : 1 00:02:59.027 Fetching value of define "__RDRND__" : 1 00:02:59.027 Fetching value of define "__RDSEED__" : 1 00:02:59.027 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:59.027 Fetching value of define "__znver1__" : (undefined) 00:02:59.027 Fetching value of define "__znver2__" : (undefined) 00:02:59.027 Fetching value of define "__znver3__" : (undefined) 00:02:59.027 Fetching value of define "__znver4__" : (undefined) 00:02:59.027 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:59.027 Message: lib/log: Defining dependency "log" 00:02:59.027 Message: lib/kvargs: Defining dependency "kvargs" 00:02:59.027 Message: lib/telemetry: Defining dependency "telemetry" 00:02:59.027 Checking for function "getentropy" : NO 00:02:59.027 Message: lib/eal: Defining dependency "eal" 00:02:59.027 Message: lib/ring: Defining dependency "ring" 00:02:59.027 Message: lib/rcu: Defining dependency "rcu" 00:02:59.027 Message: lib/mempool: Defining dependency "mempool" 00:02:59.027 Message: lib/mbuf: Defining dependency "mbuf" 00:02:59.027 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:59.027 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:59.027 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:59.027 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:59.027 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:59.027 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:59.027 Compiler for C supports arguments -mpclmul: YES 00:02:59.027 Compiler for C supports arguments -maes: YES 00:02:59.027 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:59.027 Compiler for C supports arguments -mavx512bw: YES 00:02:59.027 Compiler for C supports arguments -mavx512dq: YES 00:02:59.027 Compiler for C supports arguments -mavx512vl: YES 00:02:59.027 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:02:59.027 Compiler for C supports arguments -mavx2: YES 00:02:59.027 Compiler for C supports arguments -mavx: YES 00:02:59.027 Message: lib/net: Defining dependency "net" 00:02:59.027 Message: lib/meter: Defining dependency "meter" 00:02:59.027 Message: lib/ethdev: Defining dependency "ethdev" 00:02:59.027 Message: lib/pci: Defining dependency "pci" 00:02:59.027 Message: lib/cmdline: Defining dependency "cmdline" 00:02:59.027 Message: lib/hash: Defining dependency "hash" 00:02:59.027 Message: lib/timer: Defining dependency "timer" 00:02:59.027 Message: lib/compressdev: Defining dependency "compressdev" 00:02:59.027 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:59.027 Message: lib/dmadev: Defining dependency "dmadev" 00:02:59.027 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:59.027 Message: lib/power: Defining dependency "power" 00:02:59.027 Message: lib/reorder: Defining dependency "reorder" 00:02:59.027 Message: lib/security: Defining dependency "security" 00:02:59.027 Has header "linux/userfaultfd.h" : YES 00:02:59.027 Has header "linux/vduse.h" : YES 00:02:59.027 Message: lib/vhost: Defining dependency "vhost" 00:02:59.027 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:59.027 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:59.027 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:59.027 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:59.027 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:59.027 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:59.027 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:59.027 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:59.027 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:59.027 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:02:59.027 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:59.027 Configuring doxy-api-html.conf using configuration 00:02:59.027 Configuring doxy-api-man.conf using configuration 00:02:59.027 Program mandb found: YES (/usr/bin/mandb) 00:02:59.027 Program sphinx-build found: NO 00:02:59.027 Configuring rte_build_config.h using configuration 00:02:59.027 Message: 00:02:59.027 ================= 00:02:59.027 Applications Enabled 00:02:59.027 ================= 00:02:59.027 00:02:59.027 apps: 00:02:59.027 00:02:59.027 00:02:59.027 Message: 00:02:59.027 ================= 00:02:59.027 Libraries Enabled 00:02:59.027 ================= 00:02:59.027 00:02:59.027 libs: 00:02:59.028 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:59.028 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:59.028 cryptodev, dmadev, power, reorder, security, vhost, 00:02:59.028 00:02:59.028 Message: 00:02:59.028 =============== 00:02:59.028 Drivers Enabled 00:02:59.028 =============== 00:02:59.028 00:02:59.028 common: 00:02:59.028 00:02:59.028 bus: 00:02:59.028 pci, vdev, 00:02:59.028 mempool: 00:02:59.028 ring, 00:02:59.028 dma: 00:02:59.028 00:02:59.028 net: 00:02:59.028 00:02:59.028 crypto: 00:02:59.028 00:02:59.028 compress: 00:02:59.028 00:02:59.028 vdpa: 00:02:59.028 00:02:59.028 00:02:59.028 Message: 00:02:59.028 ================= 00:02:59.028 Content Skipped 00:02:59.028 ================= 00:02:59.028 00:02:59.028 apps: 00:02:59.028 dumpcap: explicitly disabled via build config 00:02:59.028 graph: explicitly disabled via build config 00:02:59.028 pdump: explicitly disabled via build config 00:02:59.028 proc-info: explicitly disabled via build config 00:02:59.028 test-acl: explicitly disabled via build config 00:02:59.028 test-bbdev: explicitly disabled via build config 00:02:59.028 test-cmdline: explicitly disabled via build config 00:02:59.028 test-compress-perf: explicitly disabled via build config 00:02:59.028 test-crypto-perf: explicitly disabled 
via build config 00:02:59.028 test-dma-perf: explicitly disabled via build config 00:02:59.028 test-eventdev: explicitly disabled via build config 00:02:59.028 test-fib: explicitly disabled via build config 00:02:59.028 test-flow-perf: explicitly disabled via build config 00:02:59.028 test-gpudev: explicitly disabled via build config 00:02:59.028 test-mldev: explicitly disabled via build config 00:02:59.028 test-pipeline: explicitly disabled via build config 00:02:59.028 test-pmd: explicitly disabled via build config 00:02:59.028 test-regex: explicitly disabled via build config 00:02:59.028 test-sad: explicitly disabled via build config 00:02:59.028 test-security-perf: explicitly disabled via build config 00:02:59.028 00:02:59.028 libs: 00:02:59.028 argparse: explicitly disabled via build config 00:02:59.028 metrics: explicitly disabled via build config 00:02:59.028 acl: explicitly disabled via build config 00:02:59.028 bbdev: explicitly disabled via build config 00:02:59.028 bitratestats: explicitly disabled via build config 00:02:59.028 bpf: explicitly disabled via build config 00:02:59.028 cfgfile: explicitly disabled via build config 00:02:59.028 distributor: explicitly disabled via build config 00:02:59.028 efd: explicitly disabled via build config 00:02:59.028 eventdev: explicitly disabled via build config 00:02:59.028 dispatcher: explicitly disabled via build config 00:02:59.028 gpudev: explicitly disabled via build config 00:02:59.028 gro: explicitly disabled via build config 00:02:59.028 gso: explicitly disabled via build config 00:02:59.028 ip_frag: explicitly disabled via build config 00:02:59.028 jobstats: explicitly disabled via build config 00:02:59.028 latencystats: explicitly disabled via build config 00:02:59.028 lpm: explicitly disabled via build config 00:02:59.028 member: explicitly disabled via build config 00:02:59.028 pcapng: explicitly disabled via build config 00:02:59.028 rawdev: explicitly disabled via build config 00:02:59.028 regexdev: 
explicitly disabled via build config 00:02:59.028 mldev: explicitly disabled via build config 00:02:59.028 rib: explicitly disabled via build config 00:02:59.028 sched: explicitly disabled via build config 00:02:59.028 stack: explicitly disabled via build config 00:02:59.028 ipsec: explicitly disabled via build config 00:02:59.028 pdcp: explicitly disabled via build config 00:02:59.028 fib: explicitly disabled via build config 00:02:59.028 port: explicitly disabled via build config 00:02:59.028 pdump: explicitly disabled via build config 00:02:59.028 table: explicitly disabled via build config 00:02:59.028 pipeline: explicitly disabled via build config 00:02:59.028 graph: explicitly disabled via build config 00:02:59.028 node: explicitly disabled via build config 00:02:59.028 00:02:59.028 drivers: 00:02:59.028 common/cpt: not in enabled drivers build config 00:02:59.028 common/dpaax: not in enabled drivers build config 00:02:59.028 common/iavf: not in enabled drivers build config 00:02:59.028 common/idpf: not in enabled drivers build config 00:02:59.028 common/ionic: not in enabled drivers build config 00:02:59.028 common/mvep: not in enabled drivers build config 00:02:59.028 common/octeontx: not in enabled drivers build config 00:02:59.028 bus/auxiliary: not in enabled drivers build config 00:02:59.028 bus/cdx: not in enabled drivers build config 00:02:59.028 bus/dpaa: not in enabled drivers build config 00:02:59.028 bus/fslmc: not in enabled drivers build config 00:02:59.028 bus/ifpga: not in enabled drivers build config 00:02:59.028 bus/platform: not in enabled drivers build config 00:02:59.028 bus/uacce: not in enabled drivers build config 00:02:59.028 bus/vmbus: not in enabled drivers build config 00:02:59.028 common/cnxk: not in enabled drivers build config 00:02:59.028 common/mlx5: not in enabled drivers build config 00:02:59.028 common/nfp: not in enabled drivers build config 00:02:59.028 common/nitrox: not in enabled drivers build config 00:02:59.028 
common/qat: not in enabled drivers build config 00:02:59.028 common/sfc_efx: not in enabled drivers build config 00:02:59.028 mempool/bucket: not in enabled drivers build config 00:02:59.028 mempool/cnxk: not in enabled drivers build config 00:02:59.028 mempool/dpaa: not in enabled drivers build config 00:02:59.028 mempool/dpaa2: not in enabled drivers build config 00:02:59.028 mempool/octeontx: not in enabled drivers build config 00:02:59.028 mempool/stack: not in enabled drivers build config 00:02:59.028 dma/cnxk: not in enabled drivers build config 00:02:59.028 dma/dpaa: not in enabled drivers build config 00:02:59.028 dma/dpaa2: not in enabled drivers build config 00:02:59.028 dma/hisilicon: not in enabled drivers build config 00:02:59.028 dma/idxd: not in enabled drivers build config 00:02:59.028 dma/ioat: not in enabled drivers build config 00:02:59.028 dma/skeleton: not in enabled drivers build config 00:02:59.028 net/af_packet: not in enabled drivers build config 00:02:59.028 net/af_xdp: not in enabled drivers build config 00:02:59.028 net/ark: not in enabled drivers build config 00:02:59.028 net/atlantic: not in enabled drivers build config 00:02:59.028 net/avp: not in enabled drivers build config 00:02:59.028 net/axgbe: not in enabled drivers build config 00:02:59.028 net/bnx2x: not in enabled drivers build config 00:02:59.028 net/bnxt: not in enabled drivers build config 00:02:59.028 net/bonding: not in enabled drivers build config 00:02:59.028 net/cnxk: not in enabled drivers build config 00:02:59.028 net/cpfl: not in enabled drivers build config 00:02:59.028 net/cxgbe: not in enabled drivers build config 00:02:59.028 net/dpaa: not in enabled drivers build config 00:02:59.028 net/dpaa2: not in enabled drivers build config 00:02:59.028 net/e1000: not in enabled drivers build config 00:02:59.028 net/ena: not in enabled drivers build config 00:02:59.028 net/enetc: not in enabled drivers build config 00:02:59.028 net/enetfec: not in enabled drivers build 
config 00:02:59.028 net/enic: not in enabled drivers build config 00:02:59.028 net/failsafe: not in enabled drivers build config 00:02:59.028 net/fm10k: not in enabled drivers build config 00:02:59.028 net/gve: not in enabled drivers build config 00:02:59.028 net/hinic: not in enabled drivers build config 00:02:59.028 net/hns3: not in enabled drivers build config 00:02:59.028 net/i40e: not in enabled drivers build config 00:02:59.028 net/iavf: not in enabled drivers build config 00:02:59.028 net/ice: not in enabled drivers build config 00:02:59.028 net/idpf: not in enabled drivers build config 00:02:59.028 net/igc: not in enabled drivers build config 00:02:59.028 net/ionic: not in enabled drivers build config 00:02:59.028 net/ipn3ke: not in enabled drivers build config 00:02:59.028 net/ixgbe: not in enabled drivers build config 00:02:59.028 net/mana: not in enabled drivers build config 00:02:59.028 net/memif: not in enabled drivers build config 00:02:59.028 net/mlx4: not in enabled drivers build config 00:02:59.028 net/mlx5: not in enabled drivers build config 00:02:59.028 net/mvneta: not in enabled drivers build config 00:02:59.028 net/mvpp2: not in enabled drivers build config 00:02:59.028 net/netvsc: not in enabled drivers build config 00:02:59.028 net/nfb: not in enabled drivers build config 00:02:59.028 net/nfp: not in enabled drivers build config 00:02:59.028 net/ngbe: not in enabled drivers build config 00:02:59.028 net/null: not in enabled drivers build config 00:02:59.028 net/octeontx: not in enabled drivers build config 00:02:59.028 net/octeon_ep: not in enabled drivers build config 00:02:59.028 net/pcap: not in enabled drivers build config 00:02:59.028 net/pfe: not in enabled drivers build config 00:02:59.028 net/qede: not in enabled drivers build config 00:02:59.028 net/ring: not in enabled drivers build config 00:02:59.028 net/sfc: not in enabled drivers build config 00:02:59.028 net/softnic: not in enabled drivers build config 00:02:59.028 net/tap: 
not in enabled drivers build config 00:02:59.028 net/thunderx: not in enabled drivers build config 00:02:59.028 net/txgbe: not in enabled drivers build config 00:02:59.028 net/vdev_netvsc: not in enabled drivers build config 00:02:59.028 net/vhost: not in enabled drivers build config 00:02:59.028 net/virtio: not in enabled drivers build config 00:02:59.028 net/vmxnet3: not in enabled drivers build config 00:02:59.028 raw/*: missing internal dependency, "rawdev" 00:02:59.028 crypto/armv8: not in enabled drivers build config 00:02:59.028 crypto/bcmfs: not in enabled drivers build config 00:02:59.028 crypto/caam_jr: not in enabled drivers build config 00:02:59.028 crypto/ccp: not in enabled drivers build config 00:02:59.028 crypto/cnxk: not in enabled drivers build config 00:02:59.028 crypto/dpaa_sec: not in enabled drivers build config 00:02:59.028 crypto/dpaa2_sec: not in enabled drivers build config 00:02:59.028 crypto/ipsec_mb: not in enabled drivers build config 00:02:59.028 crypto/mlx5: not in enabled drivers build config 00:02:59.028 crypto/mvsam: not in enabled drivers build config 00:02:59.028 crypto/nitrox: not in enabled drivers build config 00:02:59.028 crypto/null: not in enabled drivers build config 00:02:59.028 crypto/octeontx: not in enabled drivers build config 00:02:59.029 crypto/openssl: not in enabled drivers build config 00:02:59.029 crypto/scheduler: not in enabled drivers build config 00:02:59.029 crypto/uadk: not in enabled drivers build config 00:02:59.029 crypto/virtio: not in enabled drivers build config 00:02:59.029 compress/isal: not in enabled drivers build config 00:02:59.029 compress/mlx5: not in enabled drivers build config 00:02:59.029 compress/nitrox: not in enabled drivers build config 00:02:59.029 compress/octeontx: not in enabled drivers build config 00:02:59.029 compress/zlib: not in enabled drivers build config 00:02:59.029 regex/*: missing internal dependency, "regexdev" 00:02:59.029 ml/*: missing internal dependency, "mldev" 
00:02:59.029 vdpa/ifc: not in enabled drivers build config 00:02:59.029 vdpa/mlx5: not in enabled drivers build config 00:02:59.029 vdpa/nfp: not in enabled drivers build config 00:02:59.029 vdpa/sfc: not in enabled drivers build config 00:02:59.029 event/*: missing internal dependency, "eventdev" 00:02:59.029 baseband/*: missing internal dependency, "bbdev" 00:02:59.029 gpu/*: missing internal dependency, "gpudev" 00:02:59.029 00:02:59.029 00:02:59.029 Build targets in project: 85 00:02:59.029 00:02:59.029 DPDK 24.03.0 00:02:59.029 00:02:59.029 User defined options 00:02:59.029 buildtype : debug 00:02:59.029 default_library : shared 00:02:59.029 libdir : lib 00:02:59.029 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:59.029 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:59.029 c_link_args : 00:02:59.029 cpu_instruction_set: native 00:02:59.029 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:59.029 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:59.029 enable_docs : false 00:02:59.029 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:59.029 enable_kmods : false 00:02:59.029 max_lcores : 128 00:02:59.029 tests : false 00:02:59.029 00:02:59.029 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:59.294 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:59.559 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:59.559 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:59.559 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:59.559 [4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:59.559 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:59.559 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:59.559 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:59.559 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:59.559 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:59.559 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:59.559 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:59.559 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:59.559 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:59.559 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:59.559 [15/268] Linking static target lib/librte_kvargs.a 00:02:59.559 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:59.559 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:59.559 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:59.559 [19/268] Linking static target lib/librte_log.a 00:02:59.817 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:59.817 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:59.817 [22/268] Linking static target lib/librte_pci.a 00:02:59.817 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:59.817 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:59.817 [25/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:00.075 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:00.075 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:00.075 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:00.075 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:00.075 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:00.075 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:00.075 [32/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:00.075 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:00.075 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:00.075 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:00.075 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:00.075 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:00.075 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:00.075 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:00.075 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:00.075 [41/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:00.075 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:00.075 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:00.075 [44/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:00.075 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:00.075 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:00.075 [47/268] Compiling 
C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:00.075 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:00.075 [49/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:00.075 [50/268] Linking static target lib/librte_meter.a 00:03:00.075 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:00.075 [52/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:00.076 [53/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:00.076 [54/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:00.076 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:00.076 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:00.076 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:00.076 [58/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:00.076 [59/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:00.076 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:00.076 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:00.076 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:00.076 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:00.076 [64/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:00.076 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:00.076 [66/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:00.076 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:00.076 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:00.076 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:00.076 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:00.076 [71/268] Linking static target lib/librte_ring.a 00:03:00.076 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:00.076 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:00.076 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:00.076 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:00.076 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:00.076 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:00.076 [78/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:00.076 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:00.076 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:00.076 [81/268] Linking static target lib/librte_telemetry.a 00:03:00.076 [82/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:00.076 [83/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.076 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:00.076 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:00.076 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:00.076 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:00.076 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:00.076 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:00.076 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:00.076 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:00.076 [92/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:00.076 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:00.076 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:00.076 [95/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.076 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:00.076 [97/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:00.076 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:00.076 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:00.076 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:00.076 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:00.076 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:00.076 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:00.076 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:00.076 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:00.334 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:00.334 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:00.334 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:00.334 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:00.334 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:00.334 [111/268] Linking static target lib/librte_mempool.a 00:03:00.334 [112/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:00.334 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:00.334 [114/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:00.334 
[115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:00.334 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:00.334 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:00.334 [118/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:00.334 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:00.334 [120/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:00.334 [121/268] Linking static target lib/librte_eal.a
00:03:00.334 [122/268] Linking static target lib/librte_net.a
00:03:00.334 [123/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:00.334 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:00.334 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:00.334 [126/268] Linking static target lib/librte_rcu.a
00:03:00.334 [127/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:00.334 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:00.334 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:00.334 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:00.334 [131/268] Linking static target lib/librte_cmdline.a
00:03:00.334 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:00.334 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:00.334 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.334 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:00.334 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:00.334 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.334 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.334 [139/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:00.334 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:00.334 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:00.334 [142/268] Linking target lib/librte_log.so.24.1
00:03:00.334 [143/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:00.334 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:00.592 [145/268] Linking static target lib/librte_mbuf.a
00:03:00.592 [146/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:03:00.592 [147/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:00.592 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:00.592 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:00.592 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:00.592 [151/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.592 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:00.592 [153/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:00.592 [154/268] Linking static target lib/librte_timer.a
00:03:00.592 [155/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:00.592 [156/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.592 [157/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.592 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:00.592 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:00.592 [160/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:03:00.592 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:00.592 [162/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:00.592 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:00.592 [164/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:00.592 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:03:00.592 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:00.592 [167/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:00.592 [168/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:03:00.592 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:00.592 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:00.592 [171/268] Linking target lib/librte_telemetry.so.24.1
00:03:00.592 [172/268] Linking target lib/librte_kvargs.so.24.1
00:03:00.592 [173/268] Linking static target lib/librte_dmadev.a
00:03:00.592 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:00.592 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:03:00.592 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:00.592 [177/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:00.592 [178/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:00.592 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:00.592 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:00.592 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:00.592 [182/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:00.851 [183/268] Linking static target lib/librte_compressdev.a
00:03:00.851 [184/268] Linking static target lib/librte_power.a
00:03:00.851 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:00.851 [186/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:03:00.851 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:03:00.851 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:00.851 [189/268] Linking static target lib/librte_security.a
00:03:00.851 [190/268] Linking static target lib/librte_reorder.a
00:03:00.851 [191/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:00.851 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:03:00.851 [193/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:00.851 [194/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:00.851 [195/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:00.851 [196/268] Linking static target drivers/librte_bus_vdev.a
00:03:00.851 [197/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:00.851 [198/268] Linking static target lib/librte_hash.a
00:03:00.851 [199/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:00.851 [200/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:00.851 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:00.851 [202/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:00.851 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:00.851 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:00.851 [205/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:00.851 [206/268] Linking static target drivers/librte_mempool_ring.a
00:03:00.851 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:01.109 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:01.109 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:01.109 [210/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.109 [211/268] Linking static target drivers/librte_bus_pci.a
00:03:01.109 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.109 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:03:01.109 [214/268] Linking static target lib/librte_cryptodev.a
00:03:01.109 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.109 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.109 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.369 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:03:01.369 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.369 [220/268] Linking static target lib/librte_ethdev.a
00:03:01.369 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.369 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.369 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.628 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:01.628 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.628 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.628 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.004 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:03.004 [229/268] Linking static target lib/librte_vhost.a
00:03:03.004 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.906 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:10.185 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:10.185 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:10.185 [234/268] Linking target lib/librte_eal.so.24.1
00:03:10.444 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:03:10.444 [236/268] Linking target lib/librte_pci.so.24.1
00:03:10.444 [237/268] Linking target lib/librte_ring.so.24.1
00:03:10.444 [238/268] Linking target lib/librte_timer.so.24.1
00:03:10.444 [239/268] Linking target lib/librte_meter.so.24.1
00:03:10.444 [240/268] Linking target lib/librte_dmadev.so.24.1
00:03:10.444 [241/268] Linking target drivers/librte_bus_vdev.so.24.1
00:03:10.444 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:03:10.444 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:03:10.444 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:03:10.444 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:03:10.444 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:03:10.444 [247/268] Linking target lib/librte_mempool.so.24.1
00:03:10.444 [248/268] Linking target lib/librte_rcu.so.24.1
00:03:10.444 [249/268] Linking target drivers/librte_bus_pci.so.24.1
00:03:10.702 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:03:10.702 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:03:10.702 [252/268] Linking target drivers/librte_mempool_ring.so.24.1
00:03:10.702 [253/268] Linking target lib/librte_mbuf.so.24.1
00:03:10.961 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:03:10.961 [255/268] Linking target lib/librte_reorder.so.24.1
00:03:10.961 [256/268] Linking target lib/librte_compressdev.so.24.1
00:03:10.961 [257/268] Linking target lib/librte_net.so.24.1
00:03:10.961 [258/268] Linking target lib/librte_cryptodev.so.24.1
00:03:10.961 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:03:10.961 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:03:10.961 [261/268] Linking target lib/librte_cmdline.so.24.1
00:03:10.961 [262/268] Linking target lib/librte_hash.so.24.1
00:03:10.961 [263/268] Linking target lib/librte_security.so.24.1
00:03:11.219 [264/268] Linking target lib/librte_ethdev.so.24.1
00:03:11.219 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:03:11.219 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:03:11.219 [267/268] Linking target lib/librte_power.so.24.1
00:03:11.219 [268/268] Linking target lib/librte_vhost.so.24.1
00:03:11.219 INFO: autodetecting backend as ninja
00:03:11.219 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96
00:03:23.440 CC lib/ut_mock/mock.o
00:03:23.440 CC lib/log/log.o
00:03:23.440 CC lib/log/log_flags.o
00:03:23.440 CC lib/log/log_deprecated.o
00:03:23.440 CC lib/ut/ut.o
00:03:23.440 LIB libspdk_log.a
00:03:23.440 LIB libspdk_ut_mock.a
00:03:23.440 LIB libspdk_ut.a
00:03:23.440 SO libspdk_log.so.7.1
00:03:23.440 SO libspdk_ut_mock.so.6.0
00:03:23.440 SO libspdk_ut.so.2.0
00:03:23.440 SYMLINK libspdk_ut.so
00:03:23.440 SYMLINK libspdk_ut_mock.so
00:03:23.440 SYMLINK libspdk_log.so
00:03:23.440 CC lib/ioat/ioat.o
00:03:23.440 CC lib/util/base64.o
00:03:23.440 CC lib/util/bit_array.o
00:03:23.440 CC lib/util/cpuset.o
00:03:23.440 CC lib/util/crc16.o
00:03:23.440 CC lib/dma/dma.o
00:03:23.440 CC lib/util/crc32.o
00:03:23.440 CXX lib/trace_parser/trace.o
00:03:23.440 CC lib/util/crc32c.o
00:03:23.440 CC lib/util/crc32_ieee.o
00:03:23.440 CC lib/util/crc64.o
00:03:23.440 CC lib/util/dif.o
00:03:23.440 CC lib/util/fd.o
00:03:23.440 CC lib/util/fd_group.o
00:03:23.440 CC lib/util/file.o
00:03:23.440 CC lib/util/hexlify.o
00:03:23.440 CC lib/util/iov.o
00:03:23.440 CC lib/util/math.o
00:03:23.440 CC lib/util/net.o
00:03:23.440 CC lib/util/pipe.o
00:03:23.440 CC lib/util/strerror_tls.o
00:03:23.440 CC lib/util/string.o
00:03:23.440 CC lib/util/uuid.o
00:03:23.440 CC lib/util/xor.o
00:03:23.440 CC lib/util/zipf.o
00:03:23.440 CC lib/util/md5.o
00:03:23.440 CC lib/vfio_user/host/vfio_user_pci.o
00:03:23.440 CC lib/vfio_user/host/vfio_user.o
00:03:23.440 LIB libspdk_dma.a
00:03:23.440 SO libspdk_dma.so.5.0
00:03:23.440 LIB libspdk_ioat.a
00:03:23.440 SO libspdk_ioat.so.7.0
00:03:23.440 SYMLINK libspdk_dma.so
00:03:23.440 SYMLINK libspdk_ioat.so
00:03:23.440 LIB libspdk_vfio_user.a
00:03:23.440 SO libspdk_vfio_user.so.5.0
00:03:23.440 SYMLINK libspdk_vfio_user.so
00:03:23.440 LIB libspdk_util.a
00:03:23.440 SO libspdk_util.so.10.1
00:03:23.440 SYMLINK libspdk_util.so
00:03:23.440 LIB libspdk_trace_parser.a
00:03:23.440 SO libspdk_trace_parser.so.6.0
00:03:23.440 SYMLINK libspdk_trace_parser.so
00:03:23.440 CC lib/json/json_parse.o
00:03:23.440 CC lib/json/json_util.o
00:03:23.440 CC lib/json/json_write.o
00:03:23.440 CC lib/vmd/vmd.o
00:03:23.440 CC lib/vmd/led.o
00:03:23.440 CC lib/idxd/idxd.o
00:03:23.440 CC lib/conf/conf.o
00:03:23.440 CC lib/env_dpdk/env.o
00:03:23.440 CC lib/env_dpdk/memory.o
00:03:23.440 CC lib/idxd/idxd_user.o
00:03:23.440 CC lib/idxd/idxd_kernel.o
00:03:23.440 CC lib/env_dpdk/pci.o
00:03:23.440 CC lib/env_dpdk/init.o
00:03:23.440 CC lib/rdma_utils/rdma_utils.o
00:03:23.440 CC lib/env_dpdk/threads.o
00:03:23.440 CC lib/env_dpdk/pci_ioat.o
00:03:23.440 CC lib/env_dpdk/pci_virtio.o
00:03:23.440 CC lib/env_dpdk/pci_vmd.o
00:03:23.440 CC lib/env_dpdk/pci_idxd.o
00:03:23.440 CC lib/env_dpdk/pci_event.o
00:03:23.440 CC lib/env_dpdk/sigbus_handler.o
00:03:23.440 CC lib/env_dpdk/pci_dpdk.o
00:03:23.440 CC lib/env_dpdk/pci_dpdk_2207.o
00:03:23.440 CC lib/env_dpdk/pci_dpdk_2211.o
00:03:23.440 LIB libspdk_conf.a
00:03:23.698 LIB libspdk_rdma_utils.a
00:03:23.698 SO libspdk_conf.so.6.0
00:03:23.698 LIB libspdk_json.a
00:03:23.698 SO libspdk_rdma_utils.so.1.0
00:03:23.698 SO libspdk_json.so.6.0
00:03:23.698 SYMLINK libspdk_conf.so
00:03:23.698 SYMLINK libspdk_rdma_utils.so
00:03:23.698 SYMLINK libspdk_json.so
00:03:23.698 LIB libspdk_idxd.a
00:03:23.956 SO libspdk_idxd.so.12.1
00:03:23.956 LIB libspdk_vmd.a
00:03:23.956 SO libspdk_vmd.so.6.0
00:03:23.956 SYMLINK libspdk_idxd.so
00:03:23.956 SYMLINK libspdk_vmd.so
00:03:23.956 CC lib/rdma_provider/common.o
00:03:23.956 CC lib/rdma_provider/rdma_provider_verbs.o
00:03:23.956 CC lib/jsonrpc/jsonrpc_server.o
00:03:23.956 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:23.956 CC lib/jsonrpc/jsonrpc_client.o
00:03:23.956 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:24.214 LIB libspdk_rdma_provider.a
00:03:24.214 SO libspdk_rdma_provider.so.7.0
00:03:24.214 LIB libspdk_jsonrpc.a
00:03:24.214 SO libspdk_jsonrpc.so.6.0
00:03:24.214 SYMLINK libspdk_rdma_provider.so
00:03:24.214 SYMLINK libspdk_jsonrpc.so
00:03:24.473 LIB libspdk_env_dpdk.a
00:03:24.473 SO libspdk_env_dpdk.so.15.1
00:03:24.473 SYMLINK libspdk_env_dpdk.so
00:03:24.731 CC lib/rpc/rpc.o
00:03:24.731 LIB libspdk_rpc.a
00:03:24.731 SO libspdk_rpc.so.6.0
00:03:24.990 SYMLINK libspdk_rpc.so
00:03:25.248 CC lib/keyring/keyring.o
00:03:25.248 CC lib/keyring/keyring_rpc.o
00:03:25.248 CC lib/trace/trace.o
00:03:25.248 CC lib/trace/trace_flags.o
00:03:25.248 CC lib/trace/trace_rpc.o
00:03:25.248 CC lib/notify/notify.o
00:03:25.248 CC lib/notify/notify_rpc.o
00:03:25.506 LIB libspdk_notify.a
00:03:25.506 SO libspdk_notify.so.6.0
00:03:25.506 LIB libspdk_keyring.a
00:03:25.506 LIB libspdk_trace.a
00:03:25.506 SO libspdk_keyring.so.2.0
00:03:25.506 SYMLINK libspdk_notify.so
00:03:25.506 SO libspdk_trace.so.11.0
00:03:25.506 SYMLINK libspdk_keyring.so
00:03:25.506 SYMLINK libspdk_trace.so
00:03:25.766 CC lib/sock/sock.o
00:03:25.766 CC lib/sock/sock_rpc.o
00:03:25.766 CC lib/thread/thread.o
00:03:25.766 CC lib/thread/iobuf.o
00:03:26.333 LIB libspdk_sock.a
00:03:26.333 SO libspdk_sock.so.10.0
00:03:26.333 SYMLINK libspdk_sock.so
00:03:26.591 CC lib/nvme/nvme_ctrlr_cmd.o
00:03:26.591 CC lib/nvme/nvme_ctrlr.o
00:03:26.591 CC lib/nvme/nvme_fabric.o
00:03:26.591 CC lib/nvme/nvme_ns_cmd.o
00:03:26.591 CC lib/nvme/nvme_ns.o
00:03:26.591 CC lib/nvme/nvme_pcie_common.o
00:03:26.591 CC lib/nvme/nvme_pcie.o
00:03:26.591 CC lib/nvme/nvme_qpair.o
00:03:26.591 CC lib/nvme/nvme.o
00:03:26.591 CC lib/nvme/nvme_quirks.o
00:03:26.591 CC lib/nvme/nvme_transport.o
00:03:26.591 CC lib/nvme/nvme_discovery.o
00:03:26.591 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:26.591 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:26.591 CC lib/nvme/nvme_tcp.o
00:03:26.591 CC lib/nvme/nvme_opal.o
00:03:26.591 CC lib/nvme/nvme_io_msg.o
00:03:26.591 CC lib/nvme/nvme_poll_group.o
00:03:26.591 CC lib/nvme/nvme_zns.o
00:03:26.591 CC lib/nvme/nvme_stubs.o
00:03:26.591 CC lib/nvme/nvme_auth.o
00:03:26.591 CC lib/nvme/nvme_cuse.o
00:03:26.591 CC lib/nvme/nvme_vfio_user.o
00:03:26.591 CC lib/nvme/nvme_rdma.o
00:03:26.850 LIB libspdk_thread.a
00:03:27.108 SO libspdk_thread.so.11.0
00:03:27.108 SYMLINK libspdk_thread.so
00:03:27.374 CC lib/blob/blobstore.o
00:03:27.374 CC lib/blob/request.o
00:03:27.374 CC lib/blob/zeroes.o
00:03:27.374 CC lib/virtio/virtio_vhost_user.o
00:03:27.374 CC lib/blob/blob_bs_dev.o
00:03:27.374 CC lib/virtio/virtio.o
00:03:27.374 CC lib/init/json_config.o
00:03:27.374 CC lib/virtio/virtio_vfio_user.o
00:03:27.374 CC lib/init/subsystem_rpc.o
00:03:27.374 CC lib/init/subsystem.o
00:03:27.374 CC lib/virtio/virtio_pci.o
00:03:27.374 CC lib/init/rpc.o
00:03:27.374 CC lib/accel/accel.o
00:03:27.374 CC lib/accel/accel_rpc.o
00:03:27.374 CC lib/accel/accel_sw.o
00:03:27.374 CC lib/vfu_tgt/tgt_endpoint.o
00:03:27.374 CC lib/vfu_tgt/tgt_rpc.o
00:03:27.374 CC lib/fsdev/fsdev.o
00:03:27.374 CC lib/fsdev/fsdev_io.o
00:03:27.374 CC lib/fsdev/fsdev_rpc.o
00:03:27.632 LIB libspdk_init.a
00:03:27.632 SO libspdk_init.so.6.0
00:03:27.632 LIB libspdk_virtio.a
00:03:27.632 LIB libspdk_vfu_tgt.a
00:03:27.632 SO libspdk_virtio.so.7.0
00:03:27.632 SYMLINK libspdk_init.so
00:03:27.632 SO libspdk_vfu_tgt.so.3.0
00:03:27.632 SYMLINK libspdk_virtio.so
00:03:27.632 SYMLINK libspdk_vfu_tgt.so
00:03:27.889 LIB libspdk_fsdev.a
00:03:27.889 SO libspdk_fsdev.so.2.0
00:03:27.889 CC lib/event/app.o
00:03:27.889 CC lib/event/reactor.o
00:03:27.889 CC lib/event/log_rpc.o
00:03:27.889 CC lib/event/app_rpc.o
00:03:27.889 CC lib/event/scheduler_static.o
00:03:27.889 SYMLINK libspdk_fsdev.so
00:03:28.147 LIB libspdk_accel.a
00:03:28.147 SO libspdk_accel.so.16.0
00:03:28.147 LIB libspdk_nvme.a
00:03:28.407 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:03:28.407 SYMLINK libspdk_accel.so
00:03:28.407 LIB libspdk_event.a
00:03:28.407 SO libspdk_nvme.so.15.0
00:03:28.407 SO libspdk_event.so.14.0
00:03:28.407 SYMLINK libspdk_event.so
00:03:28.666 SYMLINK libspdk_nvme.so
00:03:28.666 CC lib/bdev/bdev.o
00:03:28.666 CC lib/bdev/bdev_rpc.o
00:03:28.666 CC lib/bdev/bdev_zone.o
00:03:28.666 CC lib/bdev/part.o
00:03:28.666 CC lib/bdev/scsi_nvme.o
00:03:28.666 LIB libspdk_fuse_dispatcher.a
00:03:28.666 SO libspdk_fuse_dispatcher.so.1.0
00:03:28.925 SYMLINK libspdk_fuse_dispatcher.so
00:03:29.492 LIB libspdk_blob.a
00:03:29.492 SO libspdk_blob.so.12.0
00:03:29.750 SYMLINK libspdk_blob.so
00:03:30.009 CC lib/lvol/lvol.o
00:03:30.009 CC lib/blobfs/blobfs.o
00:03:30.009 CC lib/blobfs/tree.o
00:03:30.576 LIB libspdk_bdev.a
00:03:30.576 SO libspdk_bdev.so.17.0
00:03:30.576 LIB libspdk_blobfs.a
00:03:30.576 SO libspdk_blobfs.so.11.0
00:03:30.576 SYMLINK libspdk_bdev.so
00:03:30.576 LIB libspdk_lvol.a
00:03:30.576 SO libspdk_lvol.so.11.0
00:03:30.576 SYMLINK libspdk_blobfs.so
00:03:30.576 SYMLINK libspdk_lvol.so
00:03:30.835 CC lib/scsi/dev.o
00:03:30.835 CC lib/scsi/lun.o
00:03:30.835 CC lib/scsi/port.o
00:03:30.835 CC lib/scsi/scsi_bdev.o
00:03:30.835 CC lib/scsi/scsi.o
00:03:30.835 CC lib/scsi/scsi_pr.o
00:03:30.835 CC lib/scsi/scsi_rpc.o
00:03:30.835 CC lib/scsi/task.o
00:03:30.835 CC lib/nbd/nbd.o
00:03:30.835 CC lib/nbd/nbd_rpc.o
00:03:30.835 CC lib/ftl/ftl_core.o
00:03:30.835 CC lib/nvmf/ctrlr.o
00:03:30.835 CC lib/ftl/ftl_init.o
00:03:30.835 CC lib/ftl/ftl_layout.o
00:03:30.835 CC lib/nvmf/ctrlr_discovery.o
00:03:30.835 CC lib/ftl/ftl_debug.o
00:03:30.835 CC lib/nvmf/ctrlr_bdev.o
00:03:30.835 CC lib/ublk/ublk.o
00:03:30.835 CC lib/ftl/ftl_io.o
00:03:30.835 CC lib/nvmf/subsystem.o
00:03:30.835 CC lib/ublk/ublk_rpc.o
00:03:30.835 CC lib/nvmf/nvmf.o
00:03:30.835 CC lib/ftl/ftl_sb.o
00:03:30.835 CC lib/ftl/ftl_l2p.o
00:03:30.835 CC lib/nvmf/nvmf_rpc.o
00:03:30.835 CC lib/ftl/ftl_l2p_flat.o
00:03:30.835 CC lib/nvmf/transport.o
00:03:30.835 CC lib/nvmf/tcp.o
00:03:30.835 CC lib/ftl/ftl_nv_cache.o
00:03:30.835 CC lib/ftl/ftl_band.o
00:03:30.835 CC lib/nvmf/stubs.o
00:03:30.835 CC lib/ftl/ftl_band_ops.o
00:03:30.835 CC lib/ftl/ftl_writer.o
00:03:30.835 CC lib/nvmf/mdns_server.o
00:03:30.835 CC lib/ftl/ftl_rq.o
00:03:30.835 CC lib/nvmf/vfio_user.o
00:03:30.835 CC lib/nvmf/rdma.o
00:03:30.835 CC lib/ftl/ftl_l2p_cache.o
00:03:30.835 CC lib/nvmf/auth.o
00:03:30.835 CC lib/ftl/ftl_p2l.o
00:03:30.835 CC lib/ftl/ftl_reloc.o
00:03:30.835 CC lib/ftl/ftl_p2l_log.o
00:03:30.835 CC lib/ftl/mngt/ftl_mngt.o
00:03:30.835 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:30.835 CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:30.835 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:30.835 CC lib/ftl/mngt/ftl_mngt_md.o
00:03:30.835 CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:30.835 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:30.835 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:30.835 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:30.835 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:30.835 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:30.835 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:30.835 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:30.835 CC lib/ftl/utils/ftl_conf.o
00:03:30.835 CC lib/ftl/utils/ftl_md.o
00:03:30.835 CC lib/ftl/utils/ftl_property.o
00:03:30.835 CC lib/ftl/utils/ftl_bitmap.o
00:03:30.835 CC lib/ftl/utils/ftl_mempool.o
00:03:30.835 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:30.835 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:30.835 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:30.835 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:30.835 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:30.835 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:30.835 CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:30.835 CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:30.835 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:30.835 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:30.835 CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:30.835 CC lib/ftl/base/ftl_base_dev.o
00:03:30.835 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:03:30.835 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:03:30.835 CC lib/ftl/base/ftl_base_bdev.o
00:03:30.835 CC lib/ftl/ftl_trace.o
00:03:31.402 LIB libspdk_scsi.a
00:03:31.402 LIB libspdk_nbd.a
00:03:31.658 SO libspdk_nbd.so.7.0
00:03:31.658 SO libspdk_scsi.so.9.0
00:03:31.658 SYMLINK libspdk_nbd.so
00:03:31.658 SYMLINK libspdk_scsi.so
00:03:31.658 LIB libspdk_ublk.a
00:03:31.658 SO libspdk_ublk.so.3.0
00:03:31.941 SYMLINK libspdk_ublk.so
00:03:31.941 LIB libspdk_ftl.a
00:03:31.941 CC lib/iscsi/conn.o
00:03:31.941 CC lib/vhost/vhost.o
00:03:31.941 CC lib/vhost/vhost_rpc.o
00:03:31.941 CC lib/iscsi/init_grp.o
00:03:31.941 CC lib/vhost/vhost_scsi.o
00:03:31.941 CC lib/iscsi/iscsi.o
00:03:31.941 CC lib/vhost/vhost_blk.o
00:03:31.941 CC lib/iscsi/param.o
00:03:31.941 CC lib/vhost/rte_vhost_user.o
00:03:31.941 CC lib/iscsi/portal_grp.o
00:03:31.941 CC lib/iscsi/tgt_node.o
00:03:31.941 CC lib/iscsi/iscsi_subsystem.o
00:03:31.941 CC lib/iscsi/iscsi_rpc.o
00:03:31.941 CC lib/iscsi/task.o
00:03:31.941 SO libspdk_ftl.so.9.0
00:03:32.200 SYMLINK libspdk_ftl.so
00:03:32.819 LIB libspdk_nvmf.a
00:03:32.819 SO libspdk_nvmf.so.20.0
00:03:32.819 LIB libspdk_vhost.a
00:03:32.819 SO libspdk_vhost.so.8.0
00:03:32.819 SYMLINK libspdk_nvmf.so
00:03:32.819 SYMLINK libspdk_vhost.so
00:03:33.192 LIB libspdk_iscsi.a
00:03:33.192 SO libspdk_iscsi.so.8.0
00:03:33.192 SYMLINK libspdk_iscsi.so
00:03:33.782 CC module/env_dpdk/env_dpdk_rpc.o
00:03:33.782 CC module/vfu_device/vfu_virtio_blk.o
00:03:33.782 CC module/vfu_device/vfu_virtio.o
00:03:33.782 CC module/vfu_device/vfu_virtio_scsi.o
00:03:33.782 CC module/vfu_device/vfu_virtio_rpc.o
00:03:33.782 CC module/vfu_device/vfu_virtio_fs.o
00:03:33.782 CC module/accel/dsa/accel_dsa_rpc.o
00:03:33.782 CC module/accel/dsa/accel_dsa.o
00:03:33.782 CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:33.782 CC module/accel/ioat/accel_ioat.o
00:03:33.782 CC module/accel/ioat/accel_ioat_rpc.o
00:03:33.782 CC module/accel/iaa/accel_iaa.o
00:03:33.782 LIB libspdk_env_dpdk_rpc.a
00:03:33.782 CC module/accel/iaa/accel_iaa_rpc.o
00:03:33.782 CC module/accel/error/accel_error.o
00:03:33.782 CC module/keyring/file/keyring_rpc.o
00:03:33.782 CC module/accel/error/accel_error_rpc.o
00:03:33.782 CC module/keyring/file/keyring.o
00:03:33.782 CC module/blob/bdev/blob_bdev.o
00:03:33.782 CC module/sock/posix/posix.o
00:03:33.782 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:33.782 CC module/scheduler/gscheduler/gscheduler.o
00:03:33.782 CC module/fsdev/aio/fsdev_aio.o
00:03:33.782 CC module/fsdev/aio/fsdev_aio_rpc.o
00:03:33.782 CC module/fsdev/aio/linux_aio_mgr.o
00:03:33.782 CC module/keyring/linux/keyring.o
00:03:33.782 CC module/keyring/linux/keyring_rpc.o
00:03:33.782 SO libspdk_env_dpdk_rpc.so.6.0
00:03:33.782 SYMLINK libspdk_env_dpdk_rpc.so
00:03:34.040 LIB libspdk_keyring_linux.a
00:03:34.040 LIB libspdk_scheduler_gscheduler.a
00:03:34.040 LIB libspdk_scheduler_dpdk_governor.a
00:03:34.040 LIB libspdk_keyring_file.a
00:03:34.040 LIB libspdk_scheduler_dynamic.a
00:03:34.040 SO libspdk_keyring_linux.so.1.0
00:03:34.040 LIB libspdk_accel_ioat.a
00:03:34.040 SO libspdk_scheduler_gscheduler.so.4.0
00:03:34.040 LIB libspdk_accel_iaa.a
00:03:34.040 SO libspdk_keyring_file.so.2.0
00:03:34.040 SO libspdk_scheduler_dpdk_governor.so.4.0
00:03:34.040 LIB libspdk_accel_error.a
00:03:34.040 SO libspdk_scheduler_dynamic.so.4.0
00:03:34.040 SO libspdk_accel_ioat.so.6.0
00:03:34.040 SO libspdk_accel_iaa.so.3.0
00:03:34.040 SO libspdk_accel_error.so.2.0
00:03:34.040 SYMLINK libspdk_keyring_linux.so
00:03:34.040 LIB libspdk_blob_bdev.a
00:03:34.040 SYMLINK libspdk_keyring_file.so
00:03:34.040 SYMLINK libspdk_scheduler_gscheduler.so
00:03:34.040 LIB libspdk_accel_dsa.a
00:03:34.040 SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:34.040 SO libspdk_blob_bdev.so.12.0
00:03:34.040 SYMLINK libspdk_scheduler_dynamic.so
00:03:34.040 SYMLINK libspdk_accel_iaa.so
00:03:34.040 SYMLINK libspdk_accel_ioat.so
00:03:34.040 SYMLINK libspdk_accel_error.so
00:03:34.040 SO libspdk_accel_dsa.so.5.0
00:03:34.040 SYMLINK libspdk_blob_bdev.so
00:03:34.298 SYMLINK libspdk_accel_dsa.so
00:03:34.298 LIB libspdk_vfu_device.a
00:03:34.298 SO libspdk_vfu_device.so.3.0
00:03:34.298 SYMLINK libspdk_vfu_device.so
00:03:34.298 LIB libspdk_fsdev_aio.a
00:03:34.298 SO libspdk_fsdev_aio.so.1.0
00:03:34.555 LIB libspdk_sock_posix.a
00:03:34.555 SO libspdk_sock_posix.so.6.0
00:03:34.555 SYMLINK libspdk_fsdev_aio.so
00:03:34.555 SYMLINK libspdk_sock_posix.so
00:03:34.555 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:34.555 CC module/bdev/error/vbdev_error_rpc.o
00:03:34.555 CC module/bdev/error/vbdev_error.o
00:03:34.555 CC module/blobfs/bdev/blobfs_bdev.o
00:03:34.555 CC module/bdev/lvol/vbdev_lvol.o
00:03:34.555 CC module/bdev/null/bdev_null.o
00:03:34.555 CC module/bdev/null/bdev_null_rpc.o
00:03:34.555 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:34.555 CC module/bdev/delay/vbdev_delay.o
00:03:34.555 CC module/bdev/delay/vbdev_delay_rpc.o
00:03:34.556 CC module/bdev/aio/bdev_aio.o
00:03:34.556 CC module/bdev/nvme/bdev_nvme.o
00:03:34.556 CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:34.556 CC module/bdev/aio/bdev_aio_rpc.o
00:03:34.556 CC module/bdev/passthru/vbdev_passthru.o
00:03:34.556 CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:34.556 CC module/bdev/virtio/bdev_virtio_blk.o
00:03:34.556 CC module/bdev/nvme/nvme_rpc.o
00:03:34.556 CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:34.556 CC module/bdev/malloc/bdev_malloc.o
00:03:34.556 CC module/bdev/nvme/bdev_mdns_client.o
00:03:34.556 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:34.556 CC module/bdev/gpt/gpt.o
00:03:34.556 CC module/bdev/ftl/bdev_ftl.o
00:03:34.556 CC module/bdev/nvme/vbdev_opal.o
00:03:34.556 CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:34.556 CC module/bdev/gpt/vbdev_gpt.o
00:03:34.556 CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:34.556 CC module/bdev/iscsi/bdev_iscsi.o
00:03:34.556 CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:34.556 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:34.556 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:34.556 CC module/bdev/split/vbdev_split.o
00:03:34.556 CC module/bdev/split/vbdev_split_rpc.o
00:03:34.556 CC module/bdev/zone_block/vbdev_zone_block.o
00:03:34.556 CC module/bdev/raid/bdev_raid_rpc.o
00:03:34.556 CC module/bdev/raid/bdev_raid.o
00:03:34.556 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:34.556 CC module/bdev/raid/bdev_raid_sb.o
00:03:34.556 CC module/bdev/raid/raid0.o
00:03:34.556 CC module/bdev/raid/concat.o
00:03:34.556 CC module/bdev/raid/raid1.o
00:03:34.814 LIB libspdk_blobfs_bdev.a
00:03:34.814 LIB libspdk_bdev_null.a
00:03:34.814 SO libspdk_blobfs_bdev.so.6.0
00:03:34.814 LIB libspdk_bdev_error.a
00:03:34.814 SO libspdk_bdev_null.so.6.0
00:03:34.814 LIB libspdk_bdev_split.a
00:03:34.814 SO libspdk_bdev_error.so.6.0
00:03:35.072 SO libspdk_bdev_split.so.6.0
00:03:35.072 SYMLINK libspdk_blobfs_bdev.so
00:03:35.072 LIB libspdk_bdev_passthru.a
00:03:35.072 SYMLINK libspdk_bdev_null.so
00:03:35.072 LIB libspdk_bdev_aio.a
00:03:35.072 LIB libspdk_bdev_gpt.a
00:03:35.072 LIB libspdk_bdev_ftl.a
00:03:35.072 LIB libspdk_bdev_delay.a
00:03:35.072 SO libspdk_bdev_passthru.so.6.0
00:03:35.072 SYMLINK libspdk_bdev_error.so
00:03:35.072 SYMLINK libspdk_bdev_split.so
00:03:35.072 SO libspdk_bdev_aio.so.6.0
00:03:35.072 SO libspdk_bdev_gpt.so.6.0
00:03:35.072 SO libspdk_bdev_ftl.so.6.0
00:03:35.072 LIB libspdk_bdev_malloc.a
00:03:35.072 SO libspdk_bdev_delay.so.6.0
00:03:35.072 LIB libspdk_bdev_zone_block.a
00:03:35.072 SYMLINK libspdk_bdev_passthru.so
00:03:35.072 LIB libspdk_bdev_iscsi.a
00:03:35.072 SO libspdk_bdev_malloc.so.6.0
00:03:35.072 SYMLINK libspdk_bdev_aio.so
00:03:35.072 SO libspdk_bdev_zone_block.so.6.0
00:03:35.072 SYMLINK libspdk_bdev_ftl.so
00:03:35.072 SYMLINK libspdk_bdev_gpt.so
00:03:35.072 SYMLINK libspdk_bdev_delay.so
00:03:35.072 LIB libspdk_bdev_lvol.a
00:03:35.072 SO libspdk_bdev_iscsi.so.6.0
00:03:35.072 SYMLINK libspdk_bdev_malloc.so
00:03:35.072 SYMLINK libspdk_bdev_zone_block.so
00:03:35.072 SO libspdk_bdev_lvol.so.6.0
00:03:35.072 LIB libspdk_bdev_virtio.a
00:03:35.072 SYMLINK libspdk_bdev_iscsi.so
00:03:35.072 SO libspdk_bdev_virtio.so.6.0
00:03:35.332 SYMLINK libspdk_bdev_lvol.so
00:03:35.332 SYMLINK libspdk_bdev_virtio.so
00:03:35.592 LIB libspdk_bdev_raid.a
00:03:35.592 SO libspdk_bdev_raid.so.6.0
00:03:35.592 SYMLINK libspdk_bdev_raid.so
00:03:36.529 LIB libspdk_bdev_nvme.a
00:03:36.529 SO libspdk_bdev_nvme.so.7.1
00:03:36.790 SYMLINK libspdk_bdev_nvme.so
00:03:37.358 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:03:37.358 CC module/event/subsystems/vmd/vmd.o
00:03:37.358 CC module/event/subsystems/vmd/vmd_rpc.o
00:03:37.358 CC module/event/subsystems/iobuf/iobuf.o
00:03:37.358 CC module/event/subsystems/scheduler/scheduler.o
00:03:37.358 CC module/event/subsystems/keyring/keyring.o
00:03:37.358 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:37.358 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:37.358 CC module/event/subsystems/sock/sock.o
00:03:37.358 CC module/event/subsystems/fsdev/fsdev.o
00:03:37.358 LIB libspdk_event_scheduler.a
00:03:37.358 LIB libspdk_event_vfu_tgt.a
00:03:37.358 LIB libspdk_event_keyring.a
00:03:37.358 LIB libspdk_event_vmd.a
00:03:37.358 SO libspdk_event_scheduler.so.4.0
00:03:37.358 LIB libspdk_event_vhost_blk.a
00:03:37.358 LIB libspdk_event_fsdev.a
00:03:37.358 LIB libspdk_event_iobuf.a
00:03:37.358 LIB libspdk_event_sock.a
00:03:37.358 SO libspdk_event_vfu_tgt.so.3.0
00:03:37.358 SO libspdk_event_vmd.so.6.0
00:03:37.618 SO libspdk_event_keyring.so.1.0
00:03:37.618 SO libspdk_event_vhost_blk.so.3.0
00:03:37.618 SO libspdk_event_fsdev.so.1.0
00:03:37.618 SO libspdk_event_sock.so.5.0
00:03:37.618 SO libspdk_event_iobuf.so.3.0
00:03:37.618 SYMLINK libspdk_event_scheduler.so
00:03:37.618 SYMLINK libspdk_event_vfu_tgt.so
00:03:37.618 SYMLINK libspdk_event_keyring.so
00:03:37.618 SYMLINK libspdk_event_vmd.so
00:03:37.618 SYMLINK libspdk_event_vhost_blk.so
00:03:37.618 SYMLINK libspdk_event_fsdev.so
00:03:37.618 SYMLINK libspdk_event_sock.so
00:03:37.618 SYMLINK libspdk_event_iobuf.so
00:03:37.878 CC module/event/subsystems/accel/accel.o
00:03:38.137 LIB libspdk_event_accel.a
00:03:38.137 SO libspdk_event_accel.so.6.0
00:03:38.137 SYMLINK libspdk_event_accel.so
00:03:38.396 CC module/event/subsystems/bdev/bdev.o
00:03:38.655 LIB libspdk_event_bdev.a
00:03:38.655 SO libspdk_event_bdev.so.6.0
00:03:38.655 SYMLINK libspdk_event_bdev.so
00:03:38.914 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:38.914 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:38.914 CC module/event/subsystems/scsi/scsi.o
00:03:38.914 CC module/event/subsystems/ublk/ublk.o
00:03:38.914 CC module/event/subsystems/nbd/nbd.o
00:03:39.174 LIB libspdk_event_ublk.a
00:03:39.174 LIB libspdk_event_nbd.a
00:03:39.174 LIB libspdk_event_scsi.a
00:03:39.174 SO libspdk_event_ublk.so.3.0
00:03:39.174 SO libspdk_event_nbd.so.6.0
00:03:39.174 SO libspdk_event_scsi.so.6.0
00:03:39.174 LIB libspdk_event_nvmf.a
00:03:39.174 SO libspdk_event_nvmf.so.6.0
00:03:39.174 SYMLINK libspdk_event_ublk.so
00:03:39.174 SYMLINK libspdk_event_nbd.so
00:03:39.174 SYMLINK libspdk_event_scsi.so
00:03:39.174 SYMLINK libspdk_event_nvmf.so
00:03:39.432 CC module/event/subsystems/iscsi/iscsi.o
00:03:39.432 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:39.691 LIB libspdk_event_vhost_scsi.a
00:03:39.691 LIB libspdk_event_iscsi.a
00:03:39.691 SO libspdk_event_vhost_scsi.so.3.0
00:03:39.691 SO libspdk_event_iscsi.so.6.0
00:03:39.691 SYMLINK libspdk_event_vhost_scsi.so
00:03:39.691 SYMLINK libspdk_event_iscsi.so
00:03:39.949 SO libspdk.so.6.0
00:03:39.949 SYMLINK libspdk.so
00:03:40.208 CC test/rpc_client/rpc_client_test.o
00:03:40.208 CC app/spdk_nvme_discover/discovery_aer.o
00:03:40.208 TEST_HEADER include/spdk/accel.h
00:03:40.208 CC app/spdk_lspci/spdk_lspci.o
00:03:40.208 TEST_HEADER include/spdk/assert.h
00:03:40.208 TEST_HEADER include/spdk/barrier.h
00:03:40.208 TEST_HEADER include/spdk/accel_module.h
00:03:40.208 TEST_HEADER include/spdk/base64.h
00:03:40.208 TEST_HEADER include/spdk/bdev.h
00:03:40.208 TEST_HEADER include/spdk/bdev_module.h
00:03:40.208 TEST_HEADER include/spdk/bdev_zone.h
00:03:40.208 TEST_HEADER include/spdk/bit_pool.h
00:03:40.208 TEST_HEADER include/spdk/bit_array.h
00:03:40.208 TEST_HEADER include/spdk/blobfs.h
00:03:40.208 TEST_HEADER include/spdk/blob_bdev.h
00:03:40.208 TEST_HEADER include/spdk/blobfs_bdev.h
00:03:40.208 TEST_HEADER include/spdk/conf.h
00:03:40.208 TEST_HEADER include/spdk/blob.h
00:03:40.208 TEST_HEADER include/spdk/config.h
00:03:40.208 CC app/trace_record/trace_record.o
00:03:40.208 TEST_HEADER include/spdk/cpuset.h
00:03:40.208 TEST_HEADER include/spdk/crc16.h
00:03:40.208 TEST_HEADER include/spdk/crc32.h
00:03:40.208 TEST_HEADER include/spdk/crc64.h
00:03:40.208 TEST_HEADER include/spdk/dif.h
00:03:40.208 CXX app/trace/trace.o
00:03:40.208 TEST_HEADER include/spdk/dma.h
00:03:40.208 TEST_HEADER include/spdk/endian.h
00:03:40.208 CC app/spdk_top/spdk_top.o
00:03:40.208 TEST_HEADER include/spdk/env_dpdk.h
00:03:40.208 CC app/spdk_nvme_identify/identify.o
00:03:40.473 TEST_HEADER include/spdk/event.h
00:03:40.473 TEST_HEADER include/spdk/env.h
00:03:40.473 TEST_HEADER include/spdk/fd_group.h
00:03:40.473 TEST_HEADER include/spdk/fd.h
00:03:40.473 CC app/spdk_nvme_perf/perf.o
00:03:40.473 TEST_HEADER include/spdk/file.h
00:03:40.473 TEST_HEADER include/spdk/fsdev.h
00:03:40.473 TEST_HEADER include/spdk/fsdev_module.h
00:03:40.473 TEST_HEADER include/spdk/fuse_dispatcher.h
00:03:40.473 TEST_HEADER include/spdk/gpt_spec.h
00:03:40.473 TEST_HEADER include/spdk/ftl.h
00:03:40.473 TEST_HEADER include/spdk/hexlify.h
00:03:40.473 TEST_HEADER include/spdk/histogram_data.h
00:03:40.473 TEST_HEADER include/spdk/idxd_spec.h
00:03:40.473 TEST_HEADER include/spdk/idxd.h
00:03:40.473 TEST_HEADER include/spdk/init.h
00:03:40.473 TEST_HEADER include/spdk/ioat_spec.h
00:03:40.473 TEST_HEADER include/spdk/ioat.h
00:03:40.473 TEST_HEADER include/spdk/iscsi_spec.h
00:03:40.474 TEST_HEADER include/spdk/json.h
00:03:40.474 TEST_HEADER include/spdk/jsonrpc.h
00:03:40.474 TEST_HEADER include/spdk/keyring.h
00:03:40.474 TEST_HEADER include/spdk/likely.h
00:03:40.474 TEST_HEADER include/spdk/keyring_module.h
00:03:40.474 TEST_HEADER include/spdk/lvol.h
00:03:40.474 TEST_HEADER include/spdk/log.h
00:03:40.474 TEST_HEADER include/spdk/md5.h
00:03:40.474 TEST_HEADER include/spdk/memory.h
00:03:40.474 TEST_HEADER include/spdk/nbd.h
00:03:40.474 TEST_HEADER include/spdk/net.h
00:03:40.474 TEST_HEADER include/spdk/mmio.h
00:03:40.474 TEST_HEADER include/spdk/notify.h
00:03:40.474 TEST_HEADER include/spdk/nvme.h
00:03:40.474 TEST_HEADER include/spdk/nvme_ocssd.h
00:03:40.474 TEST_HEADER include/spdk/nvme_intel.h
00:03:40.474 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:40.474 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:40.474 TEST_HEADER include/spdk/nvme_spec.h
00:03:40.474 TEST_HEADER include/spdk/nvme_zns.h
00:03:40.474 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:40.474 TEST_HEADER include/spdk/nvmf_transport.h
00:03:40.474 TEST_HEADER include/spdk/nvmf_cmd.h
00:03:40.474 TEST_HEADER include/spdk/opal.h
00:03:40.474 TEST_HEADER include/spdk/nvmf.h
00:03:40.474 TEST_HEADER include/spdk/nvmf_spec.h
00:03:40.474 TEST_HEADER include/spdk/opal_spec.h
00:03:40.474 TEST_HEADER include/spdk/pci_ids.h
00:03:40.474 TEST_HEADER include/spdk/queue.h
00:03:40.474 TEST_HEADER include/spdk/pipe.h
00:03:40.474 CC app/nvmf_tgt/nvmf_main.o
00:03:40.474 TEST_HEADER include/spdk/rpc.h
00:03:40.474 TEST_HEADER include/spdk/reduce.h
00:03:40.474 TEST_HEADER include/spdk/scsi_spec.h
00:03:40.474 TEST_HEADER include/spdk/scsi.h
00:03:40.474 TEST_HEADER include/spdk/scheduler.h
00:03:40.474 TEST_HEADER include/spdk/sock.h
00:03:40.474 TEST_HEADER include/spdk/stdinc.h
00:03:40.474 CC app/iscsi_tgt/iscsi_tgt.o
00:03:40.474 TEST_HEADER include/spdk/string.h
00:03:40.474 CC app/spdk_dd/spdk_dd.o
00:03:40.474 TEST_HEADER include/spdk/thread.h
00:03:40.474 TEST_HEADER include/spdk/trace_parser.h
00:03:40.474 TEST_HEADER include/spdk/trace.h
00:03:40.474 TEST_HEADER include/spdk/tree.h
00:03:40.474 TEST_HEADER include/spdk/util.h
00:03:40.474 TEST_HEADER include/spdk/ublk.h
00:03:40.474 TEST_HEADER include/spdk/uuid.h
00:03:40.474 TEST_HEADER include/spdk/vfio_user_spec.h
00:03:40.474 TEST_HEADER include/spdk/version.h 00:03:40.474 TEST_HEADER include/spdk/vmd.h 00:03:40.474 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:40.474 TEST_HEADER include/spdk/vhost.h 00:03:40.474 TEST_HEADER include/spdk/zipf.h 00:03:40.474 TEST_HEADER include/spdk/xor.h 00:03:40.474 CXX test/cpp_headers/accel.o 00:03:40.474 CXX test/cpp_headers/accel_module.o 00:03:40.474 CXX test/cpp_headers/barrier.o 00:03:40.474 CXX test/cpp_headers/assert.o 00:03:40.474 CXX test/cpp_headers/bdev.o 00:03:40.474 CC app/spdk_tgt/spdk_tgt.o 00:03:40.474 CXX test/cpp_headers/bdev_module.o 00:03:40.474 CXX test/cpp_headers/base64.o 00:03:40.474 CXX test/cpp_headers/bdev_zone.o 00:03:40.474 CXX test/cpp_headers/bit_array.o 00:03:40.474 CXX test/cpp_headers/blob_bdev.o 00:03:40.474 CXX test/cpp_headers/bit_pool.o 00:03:40.474 CXX test/cpp_headers/blobfs.o 00:03:40.474 CXX test/cpp_headers/blobfs_bdev.o 00:03:40.474 CXX test/cpp_headers/conf.o 00:03:40.474 CXX test/cpp_headers/blob.o 00:03:40.474 CXX test/cpp_headers/cpuset.o 00:03:40.474 CXX test/cpp_headers/crc32.o 00:03:40.474 CXX test/cpp_headers/crc16.o 00:03:40.474 CXX test/cpp_headers/config.o 00:03:40.474 CXX test/cpp_headers/dif.o 00:03:40.474 CXX test/cpp_headers/crc64.o 00:03:40.474 CXX test/cpp_headers/endian.o 00:03:40.474 CXX test/cpp_headers/env_dpdk.o 00:03:40.474 CXX test/cpp_headers/env.o 00:03:40.474 CXX test/cpp_headers/dma.o 00:03:40.474 CXX test/cpp_headers/event.o 00:03:40.474 CXX test/cpp_headers/fd_group.o 00:03:40.474 CXX test/cpp_headers/fd.o 00:03:40.474 CXX test/cpp_headers/file.o 00:03:40.474 CXX test/cpp_headers/fsdev_module.o 00:03:40.474 CXX test/cpp_headers/fsdev.o 00:03:40.474 CXX test/cpp_headers/ftl.o 00:03:40.474 CXX test/cpp_headers/fuse_dispatcher.o 00:03:40.474 CXX test/cpp_headers/gpt_spec.o 00:03:40.474 CXX test/cpp_headers/histogram_data.o 00:03:40.474 CXX test/cpp_headers/hexlify.o 00:03:40.474 CXX test/cpp_headers/idxd_spec.o 00:03:40.474 CXX test/cpp_headers/ioat.o 00:03:40.474 
CXX test/cpp_headers/ioat_spec.o 00:03:40.474 CXX test/cpp_headers/idxd.o 00:03:40.474 CXX test/cpp_headers/iscsi_spec.o 00:03:40.474 CXX test/cpp_headers/init.o 00:03:40.474 CXX test/cpp_headers/keyring.o 00:03:40.474 CXX test/cpp_headers/json.o 00:03:40.474 CXX test/cpp_headers/jsonrpc.o 00:03:40.474 CXX test/cpp_headers/keyring_module.o 00:03:40.474 CXX test/cpp_headers/likely.o 00:03:40.474 CXX test/cpp_headers/lvol.o 00:03:40.474 CXX test/cpp_headers/log.o 00:03:40.474 CXX test/cpp_headers/md5.o 00:03:40.474 CXX test/cpp_headers/mmio.o 00:03:40.474 CXX test/cpp_headers/memory.o 00:03:40.474 CXX test/cpp_headers/nbd.o 00:03:40.474 CXX test/cpp_headers/net.o 00:03:40.474 CXX test/cpp_headers/notify.o 00:03:40.474 CXX test/cpp_headers/nvme.o 00:03:40.474 CXX test/cpp_headers/nvme_intel.o 00:03:40.474 CXX test/cpp_headers/nvme_ocssd.o 00:03:40.474 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:40.474 CXX test/cpp_headers/nvme_spec.o 00:03:40.474 CXX test/cpp_headers/nvme_zns.o 00:03:40.474 CXX test/cpp_headers/nvmf_cmd.o 00:03:40.474 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:40.474 CXX test/cpp_headers/nvmf.o 00:03:40.474 CXX test/cpp_headers/nvmf_spec.o 00:03:40.474 CXX test/cpp_headers/nvmf_transport.o 00:03:40.474 CXX test/cpp_headers/opal.o 00:03:40.474 CC app/fio/nvme/fio_plugin.o 00:03:40.474 CC examples/util/zipf/zipf.o 00:03:40.474 CC test/app/jsoncat/jsoncat.o 00:03:40.474 CC test/thread/poller_perf/poller_perf.o 00:03:40.474 CC test/dma/test_dma/test_dma.o 00:03:40.474 CC test/env/vtophys/vtophys.o 00:03:40.474 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:40.474 CC test/app/histogram_perf/histogram_perf.o 00:03:40.474 CC test/env/memory/memory_ut.o 00:03:40.474 CXX test/cpp_headers/opal_spec.o 00:03:40.474 CC test/app/stub/stub.o 00:03:40.474 CC examples/ioat/verify/verify.o 00:03:40.474 CC test/env/pci/pci_ut.o 00:03:40.474 CC examples/ioat/perf/perf.o 00:03:40.743 CC app/fio/bdev/fio_plugin.o 00:03:40.743 CC test/app/bdev_svc/bdev_svc.o 
00:03:40.743 LINK rpc_client_test 00:03:40.743 LINK spdk_lspci 00:03:41.007 LINK interrupt_tgt 00:03:41.007 CC test/env/mem_callbacks/mem_callbacks.o 00:03:41.007 LINK spdk_nvme_discover 00:03:41.007 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:41.007 LINK jsoncat 00:03:41.007 LINK nvmf_tgt 00:03:41.007 LINK iscsi_tgt 00:03:41.007 LINK vtophys 00:03:41.007 LINK poller_perf 00:03:41.007 CXX test/cpp_headers/pci_ids.o 00:03:41.007 CXX test/cpp_headers/pipe.o 00:03:41.007 CXX test/cpp_headers/queue.o 00:03:41.007 CXX test/cpp_headers/rpc.o 00:03:41.007 CXX test/cpp_headers/reduce.o 00:03:41.007 LINK env_dpdk_post_init 00:03:41.007 CXX test/cpp_headers/scheduler.o 00:03:41.007 CXX test/cpp_headers/scsi.o 00:03:41.007 CXX test/cpp_headers/scsi_spec.o 00:03:41.007 CXX test/cpp_headers/sock.o 00:03:41.007 CXX test/cpp_headers/stdinc.o 00:03:41.007 CXX test/cpp_headers/string.o 00:03:41.007 CXX test/cpp_headers/thread.o 00:03:41.007 CXX test/cpp_headers/trace_parser.o 00:03:41.007 CXX test/cpp_headers/tree.o 00:03:41.007 CXX test/cpp_headers/trace.o 00:03:41.007 CXX test/cpp_headers/ublk.o 00:03:41.007 CXX test/cpp_headers/util.o 00:03:41.007 CXX test/cpp_headers/uuid.o 00:03:41.007 CXX test/cpp_headers/version.o 00:03:41.007 CXX test/cpp_headers/vfio_user_pci.o 00:03:41.007 CXX test/cpp_headers/vfio_user_spec.o 00:03:41.007 CXX test/cpp_headers/vhost.o 00:03:41.007 CXX test/cpp_headers/vmd.o 00:03:41.007 CXX test/cpp_headers/xor.o 00:03:41.007 LINK spdk_trace_record 00:03:41.007 CXX test/cpp_headers/zipf.o 00:03:41.007 LINK stub 00:03:41.265 LINK zipf 00:03:41.265 LINK spdk_tgt 00:03:41.265 LINK histogram_perf 00:03:41.265 LINK bdev_svc 00:03:41.265 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:41.265 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:41.265 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:41.265 LINK verify 00:03:41.265 LINK ioat_perf 00:03:41.265 LINK spdk_dd 00:03:41.525 LINK spdk_trace 00:03:41.525 LINK spdk_bdev 00:03:41.525 LINK pci_ut 00:03:41.525 
LINK spdk_nvme 00:03:41.525 CC test/event/reactor_perf/reactor_perf.o 00:03:41.525 CC test/event/reactor/reactor.o 00:03:41.525 CC test/event/app_repeat/app_repeat.o 00:03:41.525 CC test/event/event_perf/event_perf.o 00:03:41.525 LINK test_dma 00:03:41.525 CC test/event/scheduler/scheduler.o 00:03:41.525 CC examples/idxd/perf/perf.o 00:03:41.525 LINK spdk_nvme_perf 00:03:41.525 CC examples/sock/hello_world/hello_sock.o 00:03:41.525 LINK nvme_fuzz 00:03:41.525 CC examples/vmd/led/led.o 00:03:41.525 LINK spdk_top 00:03:41.525 CC examples/vmd/lsvmd/lsvmd.o 00:03:41.785 LINK mem_callbacks 00:03:41.785 CC examples/thread/thread/thread_ex.o 00:03:41.785 LINK vhost_fuzz 00:03:41.785 LINK reactor_perf 00:03:41.785 LINK event_perf 00:03:41.785 LINK reactor 00:03:41.785 LINK app_repeat 00:03:41.785 LINK spdk_nvme_identify 00:03:41.785 LINK lsvmd 00:03:41.785 LINK led 00:03:41.785 CC app/vhost/vhost.o 00:03:41.785 LINK scheduler 00:03:41.785 LINK hello_sock 00:03:41.785 LINK idxd_perf 00:03:41.785 LINK thread 00:03:42.044 CC test/nvme/e2edp/nvme_dp.o 00:03:42.044 CC test/nvme/err_injection/err_injection.o 00:03:42.044 CC test/nvme/fused_ordering/fused_ordering.o 00:03:42.044 LINK vhost 00:03:42.044 CC test/nvme/overhead/overhead.o 00:03:42.044 CC test/nvme/simple_copy/simple_copy.o 00:03:42.044 CC test/nvme/cuse/cuse.o 00:03:42.044 CC test/nvme/connect_stress/connect_stress.o 00:03:42.044 CC test/nvme/boot_partition/boot_partition.o 00:03:42.044 CC test/nvme/sgl/sgl.o 00:03:42.044 CC test/nvme/startup/startup.o 00:03:42.044 CC test/nvme/reset/reset.o 00:03:42.044 CC test/nvme/aer/aer.o 00:03:42.044 CC test/nvme/fdp/fdp.o 00:03:42.044 CC test/nvme/reserve/reserve.o 00:03:42.044 CC test/nvme/compliance/nvme_compliance.o 00:03:42.044 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:42.044 CC test/blobfs/mkfs/mkfs.o 00:03:42.044 CC test/accel/dif/dif.o 00:03:42.044 LINK memory_ut 00:03:42.044 CC test/lvol/esnap/esnap.o 00:03:42.302 LINK boot_partition 00:03:42.302 LINK 
fused_ordering 00:03:42.302 LINK startup 00:03:42.302 LINK doorbell_aers 00:03:42.302 LINK connect_stress 00:03:42.302 LINK err_injection 00:03:42.302 CC examples/nvme/abort/abort.o 00:03:42.302 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:42.302 CC examples/nvme/hello_world/hello_world.o 00:03:42.302 CC examples/nvme/hotplug/hotplug.o 00:03:42.302 LINK simple_copy 00:03:42.302 LINK reserve 00:03:42.302 CC examples/nvme/arbitration/arbitration.o 00:03:42.302 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:42.302 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:42.302 CC examples/nvme/reconnect/reconnect.o 00:03:42.302 LINK nvme_dp 00:03:42.302 LINK overhead 00:03:42.302 LINK sgl 00:03:42.302 LINK reset 00:03:42.302 LINK mkfs 00:03:42.302 LINK aer 00:03:42.302 LINK fdp 00:03:42.302 LINK nvme_compliance 00:03:42.302 CC examples/accel/perf/accel_perf.o 00:03:42.561 CC examples/blob/cli/blobcli.o 00:03:42.561 LINK pmr_persistence 00:03:42.561 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:42.561 CC examples/blob/hello_world/hello_blob.o 00:03:42.561 LINK cmb_copy 00:03:42.561 LINK hello_world 00:03:42.561 LINK hotplug 00:03:42.561 LINK reconnect 00:03:42.561 LINK arbitration 00:03:42.561 LINK abort 00:03:42.561 LINK iscsi_fuzz 00:03:42.819 LINK dif 00:03:42.819 LINK nvme_manage 00:03:42.819 LINK hello_fsdev 00:03:42.819 LINK hello_blob 00:03:42.819 LINK accel_perf 00:03:42.819 LINK blobcli 00:03:43.077 LINK cuse 00:03:43.336 CC test/bdev/bdevio/bdevio.o 00:03:43.336 CC examples/bdev/hello_world/hello_bdev.o 00:03:43.336 CC examples/bdev/bdevperf/bdevperf.o 00:03:43.594 LINK hello_bdev 00:03:43.594 LINK bdevio 00:03:43.852 LINK bdevperf 00:03:44.419 CC examples/nvmf/nvmf/nvmf.o 00:03:44.679 LINK nvmf 00:03:46.056 LINK esnap 00:03:46.056 00:03:46.056 real 0m55.645s 00:03:46.056 user 8m16.270s 00:03:46.056 sys 3m40.732s 00:03:46.056 15:20:51 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:46.056 15:20:51 make -- common/autotest_common.sh@10 -- $ 
set +x 00:03:46.056 ************************************ 00:03:46.056 END TEST make 00:03:46.056 ************************************ 00:03:46.056 15:20:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:46.056 15:20:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:46.056 15:20:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:46.056 15:20:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.057 15:20:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:46.057 15:20:51 -- pm/common@44 -- $ pid=2733906 00:03:46.057 15:20:51 -- pm/common@50 -- $ kill -TERM 2733906 00:03:46.057 15:20:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.057 15:20:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:46.057 15:20:51 -- pm/common@44 -- $ pid=2733907 00:03:46.057 15:20:51 -- pm/common@50 -- $ kill -TERM 2733907 00:03:46.057 15:20:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.057 15:20:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:46.057 15:20:51 -- pm/common@44 -- $ pid=2733910 00:03:46.057 15:20:51 -- pm/common@50 -- $ kill -TERM 2733910 00:03:46.057 15:20:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.057 15:20:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:46.057 15:20:51 -- pm/common@44 -- $ pid=2733936 00:03:46.057 15:20:51 -- pm/common@50 -- $ sudo -E kill -TERM 2733936 00:03:46.057 15:20:52 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:46.057 15:20:52 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:46.316 15:20:52 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:46.316 15:20:52 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:46.316 15:20:52 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:46.316 15:20:52 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:46.316 15:20:52 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.316 15:20:52 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.316 15:20:52 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.316 15:20:52 -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.316 15:20:52 -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.316 15:20:52 -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.316 15:20:52 -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.316 15:20:52 -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.316 15:20:52 -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.316 15:20:52 -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.316 15:20:52 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.316 15:20:52 -- scripts/common.sh@344 -- # case "$op" in 00:03:46.316 15:20:52 -- scripts/common.sh@345 -- # : 1 00:03:46.316 15:20:52 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.316 15:20:52 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:46.316 15:20:52 -- scripts/common.sh@365 -- # decimal 1 00:03:46.316 15:20:52 -- scripts/common.sh@353 -- # local d=1 00:03:46.316 15:20:52 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.316 15:20:52 -- scripts/common.sh@355 -- # echo 1 00:03:46.316 15:20:52 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.316 15:20:52 -- scripts/common.sh@366 -- # decimal 2 00:03:46.316 15:20:52 -- scripts/common.sh@353 -- # local d=2 00:03:46.316 15:20:52 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.316 15:20:52 -- scripts/common.sh@355 -- # echo 2 00:03:46.316 15:20:52 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.316 15:20:52 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.316 15:20:52 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.316 15:20:52 -- scripts/common.sh@368 -- # return 0 00:03:46.316 15:20:52 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.316 15:20:52 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:46.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.316 --rc genhtml_branch_coverage=1 00:03:46.316 --rc genhtml_function_coverage=1 00:03:46.316 --rc genhtml_legend=1 00:03:46.316 --rc geninfo_all_blocks=1 00:03:46.316 --rc geninfo_unexecuted_blocks=1 00:03:46.316 00:03:46.316 ' 00:03:46.316 15:20:52 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:46.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.316 --rc genhtml_branch_coverage=1 00:03:46.316 --rc genhtml_function_coverage=1 00:03:46.316 --rc genhtml_legend=1 00:03:46.316 --rc geninfo_all_blocks=1 00:03:46.316 --rc geninfo_unexecuted_blocks=1 00:03:46.316 00:03:46.316 ' 00:03:46.316 15:20:52 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:46.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.316 --rc genhtml_branch_coverage=1 00:03:46.316 --rc 
genhtml_function_coverage=1 00:03:46.316 --rc genhtml_legend=1 00:03:46.316 --rc geninfo_all_blocks=1 00:03:46.316 --rc geninfo_unexecuted_blocks=1 00:03:46.316 00:03:46.316 ' 00:03:46.316 15:20:52 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:46.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.316 --rc genhtml_branch_coverage=1 00:03:46.316 --rc genhtml_function_coverage=1 00:03:46.316 --rc genhtml_legend=1 00:03:46.316 --rc geninfo_all_blocks=1 00:03:46.316 --rc geninfo_unexecuted_blocks=1 00:03:46.316 00:03:46.316 ' 00:03:46.316 15:20:52 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:46.316 15:20:52 -- nvmf/common.sh@7 -- # uname -s 00:03:46.316 15:20:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:46.316 15:20:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:46.316 15:20:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:46.316 15:20:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:46.316 15:20:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:46.316 15:20:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:46.316 15:20:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:46.316 15:20:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:46.317 15:20:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:46.317 15:20:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:46.317 15:20:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:03:46.317 15:20:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:03:46.317 15:20:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:46.317 15:20:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:46.317 15:20:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:46.317 15:20:52 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:46.317 15:20:52 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:46.317 15:20:52 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:46.317 15:20:52 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:46.317 15:20:52 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:46.317 15:20:52 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:46.317 15:20:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.317 15:20:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.317 15:20:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.317 15:20:52 -- paths/export.sh@5 -- # export PATH 00:03:46.317 15:20:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.317 15:20:52 -- nvmf/common.sh@51 -- # : 0 00:03:46.317 15:20:52 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:46.317 15:20:52 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:46.317 15:20:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:46.317 15:20:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:46.317 15:20:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:46.317 15:20:52 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:46.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:46.317 15:20:52 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:46.317 15:20:52 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:46.317 15:20:52 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:46.317 15:20:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:46.317 15:20:52 -- spdk/autotest.sh@32 -- # uname -s 00:03:46.317 15:20:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:46.317 15:20:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:46.317 15:20:52 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:46.317 15:20:52 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:46.317 15:20:52 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:46.317 15:20:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:46.317 15:20:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:46.317 15:20:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:46.317 15:20:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:46.317 15:20:52 -- spdk/autotest.sh@48 -- # udevadm_pid=2796352 00:03:46.317 15:20:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:46.317 15:20:52 -- pm/common@17 -- # local monitor 00:03:46.317 15:20:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.317 15:20:52 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:46.317 15:20:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.317 15:20:52 -- pm/common@21 -- # date +%s 00:03:46.317 15:20:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.317 15:20:52 -- pm/common@21 -- # date +%s 00:03:46.317 15:20:52 -- pm/common@25 -- # sleep 1 00:03:46.317 15:20:52 -- pm/common@21 -- # date +%s 00:03:46.317 15:20:52 -- pm/common@21 -- # date +%s 00:03:46.317 15:20:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733494852 00:03:46.317 15:20:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733494852 00:03:46.317 15:20:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733494852 00:03:46.317 15:20:52 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733494852 00:03:46.317 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733494852_collect-vmstat.pm.log 00:03:46.317 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733494852_collect-cpu-load.pm.log 00:03:46.317 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733494852_collect-cpu-temp.pm.log 00:03:46.317 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733494852_collect-bmc-pm.bmc.pm.log 00:03:47.253 
15:20:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:47.253 15:20:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:47.253 15:20:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:47.253 15:20:53 -- common/autotest_common.sh@10 -- # set +x 00:03:47.513 15:20:53 -- spdk/autotest.sh@59 -- # create_test_list 00:03:47.513 15:20:53 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:47.513 15:20:53 -- common/autotest_common.sh@10 -- # set +x 00:03:47.513 15:20:53 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:47.513 15:20:53 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:47.513 15:20:53 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:47.513 15:20:53 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:47.513 15:20:53 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:47.513 15:20:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:47.513 15:20:53 -- common/autotest_common.sh@1457 -- # uname 00:03:47.513 15:20:53 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:47.513 15:20:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:47.513 15:20:53 -- common/autotest_common.sh@1477 -- # uname 00:03:47.513 15:20:53 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:47.513 15:20:53 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:47.513 15:20:53 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:47.513 lcov: LCOV version 1.15 00:03:47.513 15:20:53 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:59.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:59.715 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:11.926 15:21:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:11.926 15:21:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.926 15:21:17 -- common/autotest_common.sh@10 -- # set +x 00:04:11.926 15:21:17 -- spdk/autotest.sh@78 -- # rm -f 00:04:11.926 15:21:17 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.239 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:15.239 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:15.239 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:15.239 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:15.239 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:15.239 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:15.239 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:15.239 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:15.239 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:15.239 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:15.239 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:15.239 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:15.239 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:15.239 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:15.239 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:15.239 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:15.239 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:15.239 15:21:21 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:15.239 15:21:21 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:15.239 15:21:21 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:15.239 15:21:21 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:15.239 15:21:21 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:15.239 15:21:21 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:15.239 15:21:21 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:15.239 15:21:21 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:15.239 15:21:21 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:15.239 15:21:21 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:15.239 15:21:21 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:15.239 15:21:21 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.239 15:21:21 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.239 15:21:21 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:15.240 15:21:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.240 15:21:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.240 15:21:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:15.240 15:21:21 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:15.240 15:21:21 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:15.240 No valid GPT data, bailing 00:04:15.240 15:21:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:15.240 15:21:21 -- scripts/common.sh@394 -- # pt= 00:04:15.240 15:21:21 -- scripts/common.sh@395 -- 
# return 1 00:04:15.240 15:21:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:15.240 1+0 records in 00:04:15.240 1+0 records out 00:04:15.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434859 s, 241 MB/s 00:04:15.240 15:21:21 -- spdk/autotest.sh@105 -- # sync 00:04:15.240 15:21:21 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:15.240 15:21:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:15.240 15:21:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:21.809 15:21:26 -- spdk/autotest.sh@111 -- # uname -s 00:04:21.809 15:21:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:21.809 15:21:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:21.809 15:21:26 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:23.713 Hugepages 00:04:23.713 node hugesize free / total 00:04:23.713 node0 1048576kB 0 / 0 00:04:23.713 node0 2048kB 0 / 0 00:04:23.713 node1 1048576kB 0 / 0 00:04:23.713 node1 2048kB 0 / 0 00:04:23.713 00:04:23.713 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:23.713 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:23.713 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:23.713 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:23.713 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:23.713 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:23.713 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:23.713 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:23.713 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:23.713 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:23.713 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:23.713 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:23.713 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:23.713 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:23.713 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:23.713 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:04:23.713 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:23.713 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:23.713 15:21:29 -- spdk/autotest.sh@117 -- # uname -s 00:04:23.713 15:21:29 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:23.713 15:21:29 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:23.713 15:21:29 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:27.007 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.007 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:27.945 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:27.945 15:21:33 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:29.323 15:21:34 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:29.323 15:21:34 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:29.323 15:21:34 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:29.323 15:21:34 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:29.323 15:21:34 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:29.323 15:21:34 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:29.323 15:21:34 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.323 15:21:34 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:29.323 15:21:34 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:29.323 15:21:35 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:29.323 15:21:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:29.323 15:21:35 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:31.879 Waiting for block devices as requested 00:04:31.879 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:32.139 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:32.139 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:32.139 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:32.398 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:32.398 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:32.398 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:32.656 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:32.656 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:32.656 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:32.656 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:32.916 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:32.916 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:32.916 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:33.175 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:33.175 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:33.175 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:33.434 15:21:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:33.434 15:21:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:33.434 15:21:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:33.434 15:21:39 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:33.434 15:21:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:33.434 15:21:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:33.434 15:21:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:33.434 15:21:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:33.434 15:21:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:33.434 15:21:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:33.434 15:21:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:33.434 15:21:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:33.434 15:21:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:33.434 15:21:39 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:33.434 15:21:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:33.434 15:21:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:33.434 15:21:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:33.434 15:21:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:33.434 15:21:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:33.434 15:21:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:33.434 15:21:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:33.434 15:21:39 -- common/autotest_common.sh@1543 -- # continue 00:04:33.434 15:21:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:33.434 15:21:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.434 15:21:39 -- common/autotest_common.sh@10 -- # set +x 00:04:33.434 15:21:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:33.434 15:21:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.434 
15:21:39 -- common/autotest_common.sh@10 -- # set +x 00:04:33.434 15:21:39 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:36.835 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:36.835 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:37.771 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:37.771 15:21:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:37.771 15:21:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:37.771 15:21:43 -- common/autotest_common.sh@10 -- # set +x 00:04:37.771 15:21:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:37.771 15:21:43 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:37.771 15:21:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:37.771 15:21:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:37.771 15:21:43 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:37.771 15:21:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:37.771 15:21:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:38.028 15:21:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
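The `nvme id-ctrl` probe traced a few entries above captures `oacs=' 0xe'` and derives `oacs_ns_manage=8`: OACS (Optional Admin Command Support) is a capability bitmask, and bit 3 (`0x8`) advertises Namespace Management support, which the revert path requires. A minimal sketch of that bit test, with the value hard-coded from this trace rather than read from a live controller:

```shell
# OACS is a bitmask reported by 'nvme id-ctrl'; bit 3 (0x8) advertises
# Namespace Management / Attachment support.
oacs=0xe                          # value captured in the trace above
oacs_ns_manage=$(( oacs & 0x8 ))  # non-zero means the revert path can proceed
echo "$oacs_ns_manage"
```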
00:04:38.028 15:21:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:38.028 15:21:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:38.028 15:21:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:38.028 15:21:43 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:38.028 15:21:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:38.028 15:21:43 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:38.028 15:21:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:38.028 15:21:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:38.028 15:21:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:38.028 15:21:43 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:38.028 15:21:43 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:38.028 15:21:43 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:38.028 15:21:43 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:38.029 15:21:43 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:38.029 15:21:43 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:38.029 15:21:43 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2811089 00:04:38.029 15:21:43 -- common/autotest_common.sh@1585 -- # waitforlisten 2811089 00:04:38.029 15:21:43 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.029 15:21:43 -- common/autotest_common.sh@835 -- # '[' -z 2811089 ']' 00:04:38.029 15:21:43 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.029 15:21:43 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.029 15:21:43 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.029 15:21:43 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.029 15:21:43 -- common/autotest_common.sh@10 -- # set +x 00:04:38.029 [2024-12-06 15:21:43.912878] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:04:38.029 [2024-12-06 15:21:43.912930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2811089 ] 00:04:38.029 [2024-12-06 15:21:43.987486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.286 [2024-12-06 15:21:44.031098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.286 15:21:44 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.286 15:21:44 -- common/autotest_common.sh@868 -- # return 0 00:04:38.286 15:21:44 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:38.286 15:21:44 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:38.286 15:21:44 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:41.578 nvme0n1 00:04:41.578 15:21:47 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:41.578 [2024-12-06 15:21:47.424066] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:41.578 request: 00:04:41.578 { 00:04:41.578 "nvme_ctrlr_name": "nvme0", 00:04:41.578 "password": "test", 00:04:41.578 "method": "bdev_nvme_opal_revert", 00:04:41.578 "req_id": 1 00:04:41.578 } 00:04:41.578 Got JSON-RPC error response 00:04:41.578 response: 00:04:41.578 { 00:04:41.578 
"code": -32602, 00:04:41.578 "message": "Invalid parameters" 00:04:41.578 } 00:04:41.578 15:21:47 -- common/autotest_common.sh@1591 -- # true 00:04:41.578 15:21:47 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:41.579 15:21:47 -- common/autotest_common.sh@1595 -- # killprocess 2811089 00:04:41.579 15:21:47 -- common/autotest_common.sh@954 -- # '[' -z 2811089 ']' 00:04:41.579 15:21:47 -- common/autotest_common.sh@958 -- # kill -0 2811089 00:04:41.579 15:21:47 -- common/autotest_common.sh@959 -- # uname 00:04:41.579 15:21:47 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.579 15:21:47 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2811089 00:04:41.579 15:21:47 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.579 15:21:47 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.579 15:21:47 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2811089' 00:04:41.579 killing process with pid 2811089 00:04:41.579 15:21:47 -- common/autotest_common.sh@973 -- # kill 2811089 00:04:41.579 15:21:47 -- common/autotest_common.sh@978 -- # wait 2811089 00:04:44.106 15:21:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:44.106 15:21:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:44.106 15:21:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:44.106 15:21:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:44.106 15:21:49 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:44.106 15:21:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.106 15:21:49 -- common/autotest_common.sh@10 -- # set +x 00:04:44.107 15:21:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:44.107 15:21:49 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:44.107 15:21:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.107 15:21:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.107 15:21:49 
-- common/autotest_common.sh@10 -- # set +x 00:04:44.107 ************************************ 00:04:44.107 START TEST env 00:04:44.107 ************************************ 00:04:44.107 15:21:49 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:44.107 * Looking for test storage... 00:04:44.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:44.107 15:21:49 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:44.107 15:21:49 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:44.107 15:21:49 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.107 15:21:49 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.107 15:21:49 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.107 15:21:49 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.107 15:21:49 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.107 15:21:49 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.107 15:21:49 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.107 15:21:49 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.107 15:21:49 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.107 15:21:49 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.107 15:21:49 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.107 15:21:49 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.107 15:21:49 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.107 15:21:49 env -- scripts/common.sh@344 -- # case "$op" in 00:04:44.107 15:21:49 env -- scripts/common.sh@345 -- # : 1 00:04:44.107 15:21:49 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.107 15:21:49 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.107 15:21:49 env -- scripts/common.sh@365 -- # decimal 1 00:04:44.107 15:21:49 env -- scripts/common.sh@353 -- # local d=1 00:04:44.107 15:21:49 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.107 15:21:49 env -- scripts/common.sh@355 -- # echo 1 00:04:44.107 15:21:49 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.107 15:21:49 env -- scripts/common.sh@366 -- # decimal 2 00:04:44.107 15:21:49 env -- scripts/common.sh@353 -- # local d=2 00:04:44.107 15:21:49 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.107 15:21:49 env -- scripts/common.sh@355 -- # echo 2 00:04:44.107 15:21:49 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.107 15:21:49 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.107 15:21:49 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.107 15:21:49 env -- scripts/common.sh@368 -- # return 0 00:04:44.107 15:21:49 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.107 15:21:49 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.107 --rc genhtml_branch_coverage=1 00:04:44.107 --rc genhtml_function_coverage=1 00:04:44.107 --rc genhtml_legend=1 00:04:44.107 --rc geninfo_all_blocks=1 00:04:44.107 --rc geninfo_unexecuted_blocks=1 00:04:44.107 00:04:44.107 ' 00:04:44.107 15:21:49 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.107 --rc genhtml_branch_coverage=1 00:04:44.107 --rc genhtml_function_coverage=1 00:04:44.107 --rc genhtml_legend=1 00:04:44.107 --rc geninfo_all_blocks=1 00:04:44.107 --rc geninfo_unexecuted_blocks=1 00:04:44.107 00:04:44.107 ' 00:04:44.107 15:21:49 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:44.107 --rc genhtml_branch_coverage=1 00:04:44.107 --rc genhtml_function_coverage=1 00:04:44.107 --rc genhtml_legend=1 00:04:44.107 --rc geninfo_all_blocks=1 00:04:44.107 --rc geninfo_unexecuted_blocks=1 00:04:44.107 00:04:44.107 ' 00:04:44.107 15:21:49 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.107 --rc genhtml_branch_coverage=1 00:04:44.107 --rc genhtml_function_coverage=1 00:04:44.107 --rc genhtml_legend=1 00:04:44.107 --rc geninfo_all_blocks=1 00:04:44.107 --rc geninfo_unexecuted_blocks=1 00:04:44.107 00:04:44.107 ' 00:04:44.107 15:21:49 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:44.107 15:21:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.107 15:21:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.107 15:21:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.107 ************************************ 00:04:44.107 START TEST env_memory 00:04:44.107 ************************************ 00:04:44.107 15:21:49 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:44.107 00:04:44.107 00:04:44.107 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.107 http://cunit.sourceforge.net/ 00:04:44.107 00:04:44.107 00:04:44.107 Suite: memory 00:04:44.107 Test: alloc and free memory map ...[2024-12-06 15:21:49.971334] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:44.107 passed 00:04:44.107 Test: mem map translation ...[2024-12-06 15:21:49.988923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:44.107 [2024-12-06 
15:21:49.988938] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:44.107 [2024-12-06 15:21:49.988971] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:44.107 [2024-12-06 15:21:49.988976] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:44.107 passed 00:04:44.107 Test: mem map registration ...[2024-12-06 15:21:50.027346] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:44.107 [2024-12-06 15:21:50.027372] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:44.107 passed 00:04:44.107 Test: mem map adjacent registrations ...passed 00:04:44.107 00:04:44.107 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.107 suites 1 1 n/a 0 0 00:04:44.107 tests 4 4 4 0 0 00:04:44.107 asserts 152 152 152 0 n/a 00:04:44.107 00:04:44.107 Elapsed time = 0.134 seconds 00:04:44.107 00:04:44.107 real 0m0.147s 00:04:44.107 user 0m0.137s 00:04:44.107 sys 0m0.009s 00:04:44.107 15:21:50 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.107 15:21:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:44.107 ************************************ 00:04:44.107 END TEST env_memory 00:04:44.107 ************************************ 00:04:44.367 15:21:50 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:44.367 15:21:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:44.367 15:21:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.367 15:21:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.367 ************************************ 00:04:44.367 START TEST env_vtophys 00:04:44.367 ************************************ 00:04:44.367 15:21:50 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:44.367 EAL: lib.eal log level changed from notice to debug 00:04:44.367 EAL: Detected lcore 0 as core 0 on socket 0 00:04:44.367 EAL: Detected lcore 1 as core 1 on socket 0 00:04:44.367 EAL: Detected lcore 2 as core 2 on socket 0 00:04:44.367 EAL: Detected lcore 3 as core 3 on socket 0 00:04:44.367 EAL: Detected lcore 4 as core 4 on socket 0 00:04:44.367 EAL: Detected lcore 5 as core 5 on socket 0 00:04:44.367 EAL: Detected lcore 6 as core 6 on socket 0 00:04:44.367 EAL: Detected lcore 7 as core 8 on socket 0 00:04:44.367 EAL: Detected lcore 8 as core 9 on socket 0 00:04:44.367 EAL: Detected lcore 9 as core 10 on socket 0 00:04:44.367 EAL: Detected lcore 10 as core 11 on socket 0 00:04:44.367 EAL: Detected lcore 11 as core 12 on socket 0 00:04:44.367 EAL: Detected lcore 12 as core 13 on socket 0 00:04:44.367 EAL: Detected lcore 13 as core 16 on socket 0 00:04:44.367 EAL: Detected lcore 14 as core 17 on socket 0 00:04:44.367 EAL: Detected lcore 15 as core 18 on socket 0 00:04:44.367 EAL: Detected lcore 16 as core 19 on socket 0 00:04:44.367 EAL: Detected lcore 17 as core 20 on socket 0 00:04:44.367 EAL: Detected lcore 18 as core 21 on socket 0 00:04:44.367 EAL: Detected lcore 19 as core 25 on socket 0 00:04:44.367 EAL: Detected lcore 20 as core 26 on socket 0 00:04:44.367 EAL: Detected lcore 21 as core 27 on socket 0 00:04:44.367 EAL: Detected lcore 22 as core 28 on socket 0 00:04:44.367 EAL: Detected lcore 23 as core 29 on socket 0 00:04:44.367 EAL: Detected lcore 24 as core 0 on socket 1 00:04:44.367 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:44.367 EAL: Detected lcore 26 as core 2 on socket 1 00:04:44.367 EAL: Detected lcore 27 as core 3 on socket 1 00:04:44.367 EAL: Detected lcore 28 as core 4 on socket 1 00:04:44.367 EAL: Detected lcore 29 as core 5 on socket 1 00:04:44.367 EAL: Detected lcore 30 as core 6 on socket 1 00:04:44.367 EAL: Detected lcore 31 as core 8 on socket 1 00:04:44.367 EAL: Detected lcore 32 as core 10 on socket 1 00:04:44.367 EAL: Detected lcore 33 as core 11 on socket 1 00:04:44.367 EAL: Detected lcore 34 as core 12 on socket 1 00:04:44.367 EAL: Detected lcore 35 as core 13 on socket 1 00:04:44.367 EAL: Detected lcore 36 as core 16 on socket 1 00:04:44.367 EAL: Detected lcore 37 as core 17 on socket 1 00:04:44.367 EAL: Detected lcore 38 as core 18 on socket 1 00:04:44.367 EAL: Detected lcore 39 as core 19 on socket 1 00:04:44.367 EAL: Detected lcore 40 as core 20 on socket 1 00:04:44.367 EAL: Detected lcore 41 as core 21 on socket 1 00:04:44.367 EAL: Detected lcore 42 as core 24 on socket 1 00:04:44.367 EAL: Detected lcore 43 as core 25 on socket 1 00:04:44.367 EAL: Detected lcore 44 as core 26 on socket 1 00:04:44.367 EAL: Detected lcore 45 as core 27 on socket 1 00:04:44.367 EAL: Detected lcore 46 as core 28 on socket 1 00:04:44.367 EAL: Detected lcore 47 as core 29 on socket 1 00:04:44.367 EAL: Detected lcore 48 as core 0 on socket 0 00:04:44.367 EAL: Detected lcore 49 as core 1 on socket 0 00:04:44.367 EAL: Detected lcore 50 as core 2 on socket 0 00:04:44.367 EAL: Detected lcore 51 as core 3 on socket 0 00:04:44.367 EAL: Detected lcore 52 as core 4 on socket 0 00:04:44.367 EAL: Detected lcore 53 as core 5 on socket 0 00:04:44.367 EAL: Detected lcore 54 as core 6 on socket 0 00:04:44.367 EAL: Detected lcore 55 as core 8 on socket 0 00:04:44.367 EAL: Detected lcore 56 as core 9 on socket 0 00:04:44.367 EAL: Detected lcore 57 as core 10 on socket 0 00:04:44.367 EAL: Detected lcore 58 as core 11 on socket 0 00:04:44.367 EAL: Detected lcore 59 as core 
12 on socket 0 00:04:44.367 EAL: Detected lcore 60 as core 13 on socket 0 00:04:44.367 EAL: Detected lcore 61 as core 16 on socket 0 00:04:44.367 EAL: Detected lcore 62 as core 17 on socket 0 00:04:44.367 EAL: Detected lcore 63 as core 18 on socket 0 00:04:44.367 EAL: Detected lcore 64 as core 19 on socket 0 00:04:44.367 EAL: Detected lcore 65 as core 20 on socket 0 00:04:44.367 EAL: Detected lcore 66 as core 21 on socket 0 00:04:44.367 EAL: Detected lcore 67 as core 25 on socket 0 00:04:44.367 EAL: Detected lcore 68 as core 26 on socket 0 00:04:44.367 EAL: Detected lcore 69 as core 27 on socket 0 00:04:44.367 EAL: Detected lcore 70 as core 28 on socket 0 00:04:44.367 EAL: Detected lcore 71 as core 29 on socket 0 00:04:44.367 EAL: Detected lcore 72 as core 0 on socket 1 00:04:44.367 EAL: Detected lcore 73 as core 1 on socket 1 00:04:44.367 EAL: Detected lcore 74 as core 2 on socket 1 00:04:44.367 EAL: Detected lcore 75 as core 3 on socket 1 00:04:44.367 EAL: Detected lcore 76 as core 4 on socket 1 00:04:44.367 EAL: Detected lcore 77 as core 5 on socket 1 00:04:44.367 EAL: Detected lcore 78 as core 6 on socket 1 00:04:44.367 EAL: Detected lcore 79 as core 8 on socket 1 00:04:44.367 EAL: Detected lcore 80 as core 10 on socket 1 00:04:44.367 EAL: Detected lcore 81 as core 11 on socket 1 00:04:44.367 EAL: Detected lcore 82 as core 12 on socket 1 00:04:44.367 EAL: Detected lcore 83 as core 13 on socket 1 00:04:44.367 EAL: Detected lcore 84 as core 16 on socket 1 00:04:44.367 EAL: Detected lcore 85 as core 17 on socket 1 00:04:44.367 EAL: Detected lcore 86 as core 18 on socket 1 00:04:44.367 EAL: Detected lcore 87 as core 19 on socket 1 00:04:44.367 EAL: Detected lcore 88 as core 20 on socket 1 00:04:44.367 EAL: Detected lcore 89 as core 21 on socket 1 00:04:44.367 EAL: Detected lcore 90 as core 24 on socket 1 00:04:44.367 EAL: Detected lcore 91 as core 25 on socket 1 00:04:44.367 EAL: Detected lcore 92 as core 26 on socket 1 00:04:44.367 EAL: Detected lcore 93 as core 
27 on socket 1 00:04:44.367 EAL: Detected lcore 94 as core 28 on socket 1 00:04:44.367 EAL: Detected lcore 95 as core 29 on socket 1 00:04:44.367 EAL: Maximum logical cores by configuration: 128 00:04:44.367 EAL: Detected CPU lcores: 96 00:04:44.367 EAL: Detected NUMA nodes: 2 00:04:44.367 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:44.367 EAL: Detected shared linkage of DPDK 00:04:44.367 EAL: No shared files mode enabled, IPC will be disabled 00:04:44.367 EAL: Bus pci wants IOVA as 'DC' 00:04:44.367 EAL: Buses did not request a specific IOVA mode. 00:04:44.367 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:44.367 EAL: Selected IOVA mode 'VA' 00:04:44.367 EAL: Probing VFIO support... 00:04:44.367 EAL: IOMMU type 1 (Type 1) is supported 00:04:44.367 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:44.367 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:44.367 EAL: VFIO support initialized 00:04:44.367 EAL: Ask a virtual area of 0x2e000 bytes 00:04:44.367 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:44.367 EAL: Setting up physically contiguous memory... 
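The IOVA-mode lines above (bus pci wants `'DC'`, IOMMU available, `'VA'` selected) follow EAL's fallback rule: when no bus expresses a preference, virtual addressing is chosen if an IOMMU is usable. A simplified, illustrative reduction of that decision — `pick_iova_mode` is a made-up helper name, and the real EAL also weighs kernel and driver constraints:

```shell
# Illustrative reduction of EAL's IOVA-mode choice: if no bus expresses a
# preference ('DC' = don't care), use VA when an IOMMU is usable, else PA.
pick_iova_mode() {
    local bus_pref=$1 iommu_usable=$2
    if [ "$bus_pref" = DC ]; then
        if [ "$iommu_usable" = yes ]; then echo VA; else echo PA; fi
    else
        echo "$bus_pref"
    fi
}

pick_iova_mode DC yes   # prints VA, matching the trace above
```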
00:04:44.367 EAL: Setting maximum number of open files to 524288 00:04:44.367 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:44.367 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:44.367 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:44.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.367 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:44.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.367 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:44.367 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:44.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.367 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:44.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.367 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:44.367 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:44.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.367 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:44.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.367 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:44.367 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:44.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.367 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:44.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.367 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:44.367 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:44.367 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:44.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.367 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:44.367 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.367 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:44.367 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:44.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.367 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:44.367 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.367 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:44.367 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:44.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.367 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:44.367 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.367 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:44.367 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:44.368 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.368 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:44.368 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.368 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.368 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:44.368 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:44.368 EAL: Hugepages will be freed exactly as allocated. 
00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: TSC frequency is ~2100000 KHz 00:04:44.368 EAL: Main lcore 0 is ready (tid=7fbd8fd9da00;cpuset=[0]) 00:04:44.368 EAL: Trying to obtain current memory policy. 00:04:44.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.368 EAL: Restoring previous memory policy: 0 00:04:44.368 EAL: request: mp_malloc_sync 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: Heap on socket 0 was expanded by 2MB 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:44.368 EAL: Mem event callback 'spdk:(nil)' registered 00:04:44.368 00:04:44.368 00:04:44.368 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.368 http://cunit.sourceforge.net/ 00:04:44.368 00:04:44.368 00:04:44.368 Suite: components_suite 00:04:44.368 Test: vtophys_malloc_test ...passed 00:04:44.368 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:44.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.368 EAL: Restoring previous memory policy: 4 00:04:44.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.368 EAL: request: mp_malloc_sync 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: Heap on socket 0 was expanded by 4MB 00:04:44.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.368 EAL: request: mp_malloc_sync 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: Heap on socket 0 was shrunk by 4MB 00:04:44.368 EAL: Trying to obtain current memory policy. 
00:04:44.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.368 EAL: Restoring previous memory policy: 4 00:04:44.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.368 EAL: request: mp_malloc_sync 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: Heap on socket 0 was expanded by 6MB 00:04:44.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.368 EAL: request: mp_malloc_sync 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: Heap on socket 0 was shrunk by 6MB 00:04:44.368 EAL: Trying to obtain current memory policy. 00:04:44.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.368 EAL: Restoring previous memory policy: 4 00:04:44.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.368 EAL: request: mp_malloc_sync 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: Heap on socket 0 was expanded by 10MB 00:04:44.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.368 EAL: request: mp_malloc_sync 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: Heap on socket 0 was shrunk by 10MB 00:04:44.368 EAL: Trying to obtain current memory policy. 00:04:44.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.368 EAL: Restoring previous memory policy: 4 00:04:44.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.368 EAL: request: mp_malloc_sync 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: Heap on socket 0 was expanded by 18MB 00:04:44.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.368 EAL: request: mp_malloc_sync 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: Heap on socket 0 was shrunk by 18MB 00:04:44.368 EAL: Trying to obtain current memory policy. 
00:04:44.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.368 EAL: Restoring previous memory policy: 4 00:04:44.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.368 EAL: request: mp_malloc_sync 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: Heap on socket 0 was expanded by 34MB 00:04:44.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.368 EAL: request: mp_malloc_sync 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: Heap on socket 0 was shrunk by 34MB 00:04:44.368 EAL: Trying to obtain current memory policy. 00:04:44.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.368 EAL: Restoring previous memory policy: 4 00:04:44.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.368 EAL: request: mp_malloc_sync 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: Heap on socket 0 was expanded by 66MB 00:04:44.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.368 EAL: request: mp_malloc_sync 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: Heap on socket 0 was shrunk by 66MB 00:04:44.368 EAL: Trying to obtain current memory policy. 00:04:44.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.368 EAL: Restoring previous memory policy: 4 00:04:44.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.368 EAL: request: mp_malloc_sync 00:04:44.368 EAL: No shared files mode enabled, IPC is disabled 00:04:44.368 EAL: Heap on socket 0 was expanded by 130MB 00:04:44.368 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.626 EAL: request: mp_malloc_sync 00:04:44.626 EAL: No shared files mode enabled, IPC is disabled 00:04:44.626 EAL: Heap on socket 0 was shrunk by 130MB 00:04:44.626 EAL: Trying to obtain current memory policy. 
00:04:44.626 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.626 EAL: Restoring previous memory policy: 4 00:04:44.626 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.626 EAL: request: mp_malloc_sync 00:04:44.626 EAL: No shared files mode enabled, IPC is disabled 00:04:44.626 EAL: Heap on socket 0 was expanded by 258MB 00:04:44.626 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.626 EAL: request: mp_malloc_sync 00:04:44.627 EAL: No shared files mode enabled, IPC is disabled 00:04:44.627 EAL: Heap on socket 0 was shrunk by 258MB 00:04:44.627 EAL: Trying to obtain current memory policy. 00:04:44.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.627 EAL: Restoring previous memory policy: 4 00:04:44.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.627 EAL: request: mp_malloc_sync 00:04:44.627 EAL: No shared files mode enabled, IPC is disabled 00:04:44.627 EAL: Heap on socket 0 was expanded by 514MB 00:04:44.885 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.885 EAL: request: mp_malloc_sync 00:04:44.885 EAL: No shared files mode enabled, IPC is disabled 00:04:44.885 EAL: Heap on socket 0 was shrunk by 514MB 00:04:44.885 EAL: Trying to obtain current memory policy. 
00:04:44.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.144 EAL: Restoring previous memory policy: 4 00:04:45.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.144 EAL: request: mp_malloc_sync 00:04:45.144 EAL: No shared files mode enabled, IPC is disabled 00:04:45.144 EAL: Heap on socket 0 was expanded by 1026MB 00:04:45.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.403 EAL: request: mp_malloc_sync 00:04:45.404 EAL: No shared files mode enabled, IPC is disabled 00:04:45.404 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:45.404 passed 00:04:45.404 00:04:45.404 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.404 suites 1 1 n/a 0 0 00:04:45.404 tests 2 2 2 0 0 00:04:45.404 asserts 497 497 497 0 n/a 00:04:45.404 00:04:45.404 Elapsed time = 0.970 seconds 00:04:45.404 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.404 EAL: request: mp_malloc_sync 00:04:45.404 EAL: No shared files mode enabled, IPC is disabled 00:04:45.404 EAL: Heap on socket 0 was shrunk by 2MB 00:04:45.404 EAL: No shared files mode enabled, IPC is disabled 00:04:45.404 EAL: No shared files mode enabled, IPC is disabled 00:04:45.404 EAL: No shared files mode enabled, IPC is disabled 00:04:45.404 00:04:45.404 real 0m1.106s 00:04:45.404 user 0m0.642s 00:04:45.404 sys 0m0.433s 00:04:45.404 15:21:51 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.404 15:21:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:45.404 ************************************ 00:04:45.404 END TEST env_vtophys 00:04:45.404 ************************************ 00:04:45.404 15:21:51 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:45.404 15:21:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.404 15:21:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.404 15:21:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.404 
************************************ 00:04:45.404 START TEST env_pci 00:04:45.404 ************************************ 00:04:45.404 15:21:51 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:45.404 00:04:45.404 00:04:45.404 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.404 http://cunit.sourceforge.net/ 00:04:45.404 00:04:45.404 00:04:45.404 Suite: pci 00:04:45.404 Test: pci_hook ...[2024-12-06 15:21:51.344771] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2812408 has claimed it 00:04:45.404 EAL: Cannot find device (10000:00:01.0) 00:04:45.404 EAL: Failed to attach device on primary process 00:04:45.404 passed 00:04:45.404 00:04:45.404 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.404 suites 1 1 n/a 0 0 00:04:45.404 tests 1 1 1 0 0 00:04:45.404 asserts 25 25 25 0 n/a 00:04:45.404 00:04:45.404 Elapsed time = 0.027 seconds 00:04:45.404 00:04:45.404 real 0m0.046s 00:04:45.404 user 0m0.018s 00:04:45.404 sys 0m0.027s 00:04:45.404 15:21:51 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.404 15:21:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:45.404 ************************************ 00:04:45.404 END TEST env_pci 00:04:45.404 ************************************ 00:04:45.664 15:21:51 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:45.664 15:21:51 env -- env/env.sh@15 -- # uname 00:04:45.664 15:21:51 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:45.664 15:21:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:45.664 15:21:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.664 15:21:51 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:45.664 15:21:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.664 15:21:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.664 ************************************ 00:04:45.664 START TEST env_dpdk_post_init 00:04:45.664 ************************************ 00:04:45.664 15:21:51 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.664 EAL: Detected CPU lcores: 96 00:04:45.664 EAL: Detected NUMA nodes: 2 00:04:45.664 EAL: Detected shared linkage of DPDK 00:04:45.664 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:45.664 EAL: Selected IOVA mode 'VA' 00:04:45.664 EAL: VFIO support initialized 00:04:45.664 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:45.664 EAL: Using IOMMU type 1 (Type 1) 00:04:45.664 EAL: Ignore mapping IO port bar(1) 00:04:45.664 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:45.664 EAL: Ignore mapping IO port bar(1) 00:04:45.664 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:45.664 EAL: Ignore mapping IO port bar(1) 00:04:45.664 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:45.664 EAL: Ignore mapping IO port bar(1) 00:04:45.664 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:45.664 EAL: Ignore mapping IO port bar(1) 00:04:45.664 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:45.664 EAL: Ignore mapping IO port bar(1) 00:04:45.664 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:45.923 EAL: Ignore mapping IO port bar(1) 00:04:45.924 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:45.924 EAL: Ignore mapping IO port bar(1) 00:04:45.924 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:46.492 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:46.492 EAL: Ignore mapping IO port bar(1) 00:04:46.492 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:46.492 EAL: Ignore mapping IO port bar(1) 00:04:46.492 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:46.492 EAL: Ignore mapping IO port bar(1) 00:04:46.492 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:46.492 EAL: Ignore mapping IO port bar(1) 00:04:46.492 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:46.492 EAL: Ignore mapping IO port bar(1) 00:04:46.492 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:46.750 EAL: Ignore mapping IO port bar(1) 00:04:46.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:46.750 EAL: Ignore mapping IO port bar(1) 00:04:46.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:46.750 EAL: Ignore mapping IO port bar(1) 00:04:46.750 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:50.105 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:50.105 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:50.363 Starting DPDK initialization... 00:04:50.363 Starting SPDK post initialization... 00:04:50.363 SPDK NVMe probe 00:04:50.363 Attaching to 0000:5e:00.0 00:04:50.363 Attached to 0000:5e:00.0 00:04:50.363 Cleaning up... 
00:04:50.363 00:04:50.363 real 0m4.890s 00:04:50.363 user 0m3.438s 00:04:50.363 sys 0m0.521s 00:04:50.363 15:21:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.363 15:21:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.363 ************************************ 00:04:50.363 END TEST env_dpdk_post_init 00:04:50.363 ************************************ 00:04:50.622 15:21:56 env -- env/env.sh@26 -- # uname 00:04:50.622 15:21:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:50.622 15:21:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:50.622 15:21:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.622 15:21:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.622 15:21:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.622 ************************************ 00:04:50.622 START TEST env_mem_callbacks 00:04:50.622 ************************************ 00:04:50.622 15:21:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:50.622 EAL: Detected CPU lcores: 96 00:04:50.622 EAL: Detected NUMA nodes: 2 00:04:50.622 EAL: Detected shared linkage of DPDK 00:04:50.623 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:50.623 EAL: Selected IOVA mode 'VA' 00:04:50.623 EAL: VFIO support initialized 00:04:50.623 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:50.623 00:04:50.623 00:04:50.623 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.623 http://cunit.sourceforge.net/ 00:04:50.623 00:04:50.623 00:04:50.623 Suite: memory 00:04:50.623 Test: test ... 
00:04:50.623 register 0x200000200000 2097152 00:04:50.623 malloc 3145728 00:04:50.623 register 0x200000400000 4194304 00:04:50.623 buf 0x200000500000 len 3145728 PASSED 00:04:50.623 malloc 64 00:04:50.623 buf 0x2000004fff40 len 64 PASSED 00:04:50.623 malloc 4194304 00:04:50.623 register 0x200000800000 6291456 00:04:50.623 buf 0x200000a00000 len 4194304 PASSED 00:04:50.623 free 0x200000500000 3145728 00:04:50.623 free 0x2000004fff40 64 00:04:50.623 unregister 0x200000400000 4194304 PASSED 00:04:50.623 free 0x200000a00000 4194304 00:04:50.623 unregister 0x200000800000 6291456 PASSED 00:04:50.623 malloc 8388608 00:04:50.623 register 0x200000400000 10485760 00:04:50.623 buf 0x200000600000 len 8388608 PASSED 00:04:50.623 free 0x200000600000 8388608 00:04:50.623 unregister 0x200000400000 10485760 PASSED 00:04:50.623 passed 00:04:50.623 00:04:50.623 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.623 suites 1 1 n/a 0 0 00:04:50.623 tests 1 1 1 0 0 00:04:50.623 asserts 15 15 15 0 n/a 00:04:50.623 00:04:50.623 Elapsed time = 0.008 seconds 00:04:50.623 00:04:50.623 real 0m0.056s 00:04:50.623 user 0m0.022s 00:04:50.623 sys 0m0.034s 00:04:50.623 15:21:56 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.623 15:21:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:50.623 ************************************ 00:04:50.623 END TEST env_mem_callbacks 00:04:50.623 ************************************ 00:04:50.623 00:04:50.623 real 0m6.783s 00:04:50.623 user 0m4.497s 00:04:50.623 sys 0m1.358s 00:04:50.623 15:21:56 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.623 15:21:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:50.623 ************************************ 00:04:50.623 END TEST env 00:04:50.623 ************************************ 00:04:50.623 15:21:56 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:50.623 15:21:56 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.623 15:21:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.623 15:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:50.623 ************************************ 00:04:50.623 START TEST rpc 00:04:50.623 ************************************ 00:04:50.623 15:21:56 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:50.882 * Looking for test storage... 00:04:50.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.882 15:21:56 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:50.882 15:21:56 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:50.882 15:21:56 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:50.882 15:21:56 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:50.882 15:21:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.882 15:21:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.882 15:21:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.882 15:21:56 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.882 15:21:56 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.882 15:21:56 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.882 15:21:56 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.882 15:21:56 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.882 15:21:56 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.882 15:21:56 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.882 15:21:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.882 15:21:56 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:50.882 15:21:56 rpc -- scripts/common.sh@345 -- # : 1 00:04:50.882 15:21:56 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.882 15:21:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.882 15:21:56 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:50.882 15:21:56 rpc -- scripts/common.sh@353 -- # local d=1 00:04:50.882 15:21:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.882 15:21:56 rpc -- scripts/common.sh@355 -- # echo 1 00:04:50.882 15:21:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.882 15:21:56 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:50.882 15:21:56 rpc -- scripts/common.sh@353 -- # local d=2 00:04:50.882 15:21:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.882 15:21:56 rpc -- scripts/common.sh@355 -- # echo 2 00:04:50.882 15:21:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.882 15:21:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.882 15:21:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.882 15:21:56 rpc -- scripts/common.sh@368 -- # return 0 00:04:50.882 15:21:56 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.882 15:21:56 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:50.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.882 --rc genhtml_branch_coverage=1 00:04:50.882 --rc genhtml_function_coverage=1 00:04:50.882 --rc genhtml_legend=1 00:04:50.882 --rc geninfo_all_blocks=1 00:04:50.883 --rc geninfo_unexecuted_blocks=1 00:04:50.883 00:04:50.883 ' 00:04:50.883 15:21:56 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:50.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.883 --rc genhtml_branch_coverage=1 00:04:50.883 --rc genhtml_function_coverage=1 00:04:50.883 --rc genhtml_legend=1 00:04:50.883 --rc geninfo_all_blocks=1 00:04:50.883 --rc geninfo_unexecuted_blocks=1 00:04:50.883 00:04:50.883 ' 00:04:50.883 15:21:56 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:50.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:50.883 --rc genhtml_branch_coverage=1 00:04:50.883 --rc genhtml_function_coverage=1 00:04:50.883 --rc genhtml_legend=1 00:04:50.883 --rc geninfo_all_blocks=1 00:04:50.883 --rc geninfo_unexecuted_blocks=1 00:04:50.883 00:04:50.883 ' 00:04:50.883 15:21:56 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:50.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.883 --rc genhtml_branch_coverage=1 00:04:50.883 --rc genhtml_function_coverage=1 00:04:50.883 --rc genhtml_legend=1 00:04:50.883 --rc geninfo_all_blocks=1 00:04:50.883 --rc geninfo_unexecuted_blocks=1 00:04:50.883 00:04:50.883 ' 00:04:50.883 15:21:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2813458 00:04:50.883 15:21:56 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:50.883 15:21:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.883 15:21:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2813458 00:04:50.883 15:21:56 rpc -- common/autotest_common.sh@835 -- # '[' -z 2813458 ']' 00:04:50.883 15:21:56 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.883 15:21:56 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.883 15:21:56 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.883 15:21:56 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.883 15:21:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.883 [2024-12-06 15:21:56.802330] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:04:50.883 [2024-12-06 15:21:56.802388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2813458 ] 00:04:50.883 [2024-12-06 15:21:56.878274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.141 [2024-12-06 15:21:56.919797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:51.141 [2024-12-06 15:21:56.919833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2813458' to capture a snapshot of events at runtime. 00:04:51.141 [2024-12-06 15:21:56.919841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:51.141 [2024-12-06 15:21:56.919847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:51.141 [2024-12-06 15:21:56.919852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2813458 for offline analysis/debug. 
00:04:51.141 [2024-12-06 15:21:56.920425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.141 15:21:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.141 15:21:57 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:51.141 15:21:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:51.141 15:21:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:51.141 15:21:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:51.141 15:21:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:51.141 15:21:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.141 15:21:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.141 15:21:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.400 ************************************ 00:04:51.400 START TEST rpc_integrity 00:04:51.400 ************************************ 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:51.400 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.400 15:21:57 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:51.400 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:51.400 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:51.400 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.400 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:51.400 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.400 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:51.400 { 00:04:51.400 "name": "Malloc0", 00:04:51.400 "aliases": [ 00:04:51.400 "5e7f7d56-3b0b-411a-a7ec-888a498777cc" 00:04:51.400 ], 00:04:51.400 "product_name": "Malloc disk", 00:04:51.400 "block_size": 512, 00:04:51.400 "num_blocks": 16384, 00:04:51.400 "uuid": "5e7f7d56-3b0b-411a-a7ec-888a498777cc", 00:04:51.400 "assigned_rate_limits": { 00:04:51.400 "rw_ios_per_sec": 0, 00:04:51.400 "rw_mbytes_per_sec": 0, 00:04:51.400 "r_mbytes_per_sec": 0, 00:04:51.400 "w_mbytes_per_sec": 0 00:04:51.400 }, 00:04:51.400 "claimed": false, 00:04:51.400 "zoned": false, 00:04:51.400 "supported_io_types": { 00:04:51.400 "read": true, 00:04:51.400 "write": true, 00:04:51.400 "unmap": true, 00:04:51.400 "flush": true, 00:04:51.400 "reset": true, 00:04:51.400 "nvme_admin": false, 00:04:51.400 "nvme_io": false, 00:04:51.400 "nvme_io_md": false, 00:04:51.400 "write_zeroes": true, 00:04:51.400 "zcopy": true, 00:04:51.400 "get_zone_info": false, 00:04:51.400 
"zone_management": false, 00:04:51.400 "zone_append": false, 00:04:51.400 "compare": false, 00:04:51.400 "compare_and_write": false, 00:04:51.400 "abort": true, 00:04:51.400 "seek_hole": false, 00:04:51.400 "seek_data": false, 00:04:51.400 "copy": true, 00:04:51.400 "nvme_iov_md": false 00:04:51.400 }, 00:04:51.400 "memory_domains": [ 00:04:51.400 { 00:04:51.400 "dma_device_id": "system", 00:04:51.400 "dma_device_type": 1 00:04:51.400 }, 00:04:51.400 { 00:04:51.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.400 "dma_device_type": 2 00:04:51.400 } 00:04:51.400 ], 00:04:51.400 "driver_specific": {} 00:04:51.400 } 00:04:51.400 ]' 00:04:51.400 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:51.400 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:51.400 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.400 [2024-12-06 15:21:57.296677] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:51.400 [2024-12-06 15:21:57.296704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:51.400 [2024-12-06 15:21:57.296716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24ad100 00:04:51.400 [2024-12-06 15:21:57.296722] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:51.400 [2024-12-06 15:21:57.297796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:51.400 [2024-12-06 15:21:57.297816] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:51.400 Passthru0 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.400 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.400 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.400 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:51.400 { 00:04:51.400 "name": "Malloc0", 00:04:51.400 "aliases": [ 00:04:51.400 "5e7f7d56-3b0b-411a-a7ec-888a498777cc" 00:04:51.400 ], 00:04:51.400 "product_name": "Malloc disk", 00:04:51.400 "block_size": 512, 00:04:51.400 "num_blocks": 16384, 00:04:51.400 "uuid": "5e7f7d56-3b0b-411a-a7ec-888a498777cc", 00:04:51.400 "assigned_rate_limits": { 00:04:51.400 "rw_ios_per_sec": 0, 00:04:51.400 "rw_mbytes_per_sec": 0, 00:04:51.400 "r_mbytes_per_sec": 0, 00:04:51.400 "w_mbytes_per_sec": 0 00:04:51.400 }, 00:04:51.400 "claimed": true, 00:04:51.400 "claim_type": "exclusive_write", 00:04:51.400 "zoned": false, 00:04:51.400 "supported_io_types": { 00:04:51.400 "read": true, 00:04:51.400 "write": true, 00:04:51.400 "unmap": true, 00:04:51.400 "flush": true, 00:04:51.400 "reset": true, 00:04:51.400 "nvme_admin": false, 00:04:51.400 "nvme_io": false, 00:04:51.400 "nvme_io_md": false, 00:04:51.400 "write_zeroes": true, 00:04:51.400 "zcopy": true, 00:04:51.400 "get_zone_info": false, 00:04:51.400 "zone_management": false, 00:04:51.400 "zone_append": false, 00:04:51.400 "compare": false, 00:04:51.400 "compare_and_write": false, 00:04:51.400 "abort": true, 00:04:51.400 "seek_hole": false, 00:04:51.400 "seek_data": false, 00:04:51.400 "copy": true, 00:04:51.400 "nvme_iov_md": false 00:04:51.400 }, 00:04:51.400 "memory_domains": [ 00:04:51.400 { 00:04:51.400 "dma_device_id": "system", 00:04:51.401 "dma_device_type": 1 00:04:51.401 }, 00:04:51.401 { 00:04:51.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.401 "dma_device_type": 2 00:04:51.401 } 00:04:51.401 ], 00:04:51.401 "driver_specific": {} 00:04:51.401 }, 00:04:51.401 { 
00:04:51.401 "name": "Passthru0", 00:04:51.401 "aliases": [ 00:04:51.401 "3cb9ca0b-e1a4-5940-b2e3-7da66ec7fda9" 00:04:51.401 ], 00:04:51.401 "product_name": "passthru", 00:04:51.401 "block_size": 512, 00:04:51.401 "num_blocks": 16384, 00:04:51.401 "uuid": "3cb9ca0b-e1a4-5940-b2e3-7da66ec7fda9", 00:04:51.401 "assigned_rate_limits": { 00:04:51.401 "rw_ios_per_sec": 0, 00:04:51.401 "rw_mbytes_per_sec": 0, 00:04:51.401 "r_mbytes_per_sec": 0, 00:04:51.401 "w_mbytes_per_sec": 0 00:04:51.401 }, 00:04:51.401 "claimed": false, 00:04:51.401 "zoned": false, 00:04:51.401 "supported_io_types": { 00:04:51.401 "read": true, 00:04:51.401 "write": true, 00:04:51.401 "unmap": true, 00:04:51.401 "flush": true, 00:04:51.401 "reset": true, 00:04:51.401 "nvme_admin": false, 00:04:51.401 "nvme_io": false, 00:04:51.401 "nvme_io_md": false, 00:04:51.401 "write_zeroes": true, 00:04:51.401 "zcopy": true, 00:04:51.401 "get_zone_info": false, 00:04:51.401 "zone_management": false, 00:04:51.401 "zone_append": false, 00:04:51.401 "compare": false, 00:04:51.401 "compare_and_write": false, 00:04:51.401 "abort": true, 00:04:51.401 "seek_hole": false, 00:04:51.401 "seek_data": false, 00:04:51.401 "copy": true, 00:04:51.401 "nvme_iov_md": false 00:04:51.401 }, 00:04:51.401 "memory_domains": [ 00:04:51.401 { 00:04:51.401 "dma_device_id": "system", 00:04:51.401 "dma_device_type": 1 00:04:51.401 }, 00:04:51.401 { 00:04:51.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.401 "dma_device_type": 2 00:04:51.401 } 00:04:51.401 ], 00:04:51.401 "driver_specific": { 00:04:51.401 "passthru": { 00:04:51.401 "name": "Passthru0", 00:04:51.401 "base_bdev_name": "Malloc0" 00:04:51.401 } 00:04:51.401 } 00:04:51.401 } 00:04:51.401 ]' 00:04:51.401 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:51.401 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:51.401 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.401 15:21:57 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.401 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.401 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.401 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:51.401 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.401 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.401 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.401 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.401 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.401 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.660 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.660 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.660 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:51.660 15:21:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:51.660 00:04:51.660 real 0m0.281s 00:04:51.660 user 0m0.167s 00:04:51.660 sys 0m0.050s 00:04:51.660 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.660 15:21:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.660 ************************************ 00:04:51.660 END TEST rpc_integrity 00:04:51.660 ************************************ 00:04:51.660 15:21:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:51.660 15:21:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.660 15:21:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.660 15:21:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.660 ************************************ 00:04:51.660 START TEST rpc_plugins 
00:04:51.660 ************************************ 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:51.660 15:21:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.660 15:21:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:51.660 15:21:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.660 15:21:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:51.660 { 00:04:51.660 "name": "Malloc1", 00:04:51.660 "aliases": [ 00:04:51.660 "8970c19a-e0f0-492c-8f71-1b6f482369f4" 00:04:51.660 ], 00:04:51.660 "product_name": "Malloc disk", 00:04:51.660 "block_size": 4096, 00:04:51.660 "num_blocks": 256, 00:04:51.660 "uuid": "8970c19a-e0f0-492c-8f71-1b6f482369f4", 00:04:51.660 "assigned_rate_limits": { 00:04:51.660 "rw_ios_per_sec": 0, 00:04:51.660 "rw_mbytes_per_sec": 0, 00:04:51.660 "r_mbytes_per_sec": 0, 00:04:51.660 "w_mbytes_per_sec": 0 00:04:51.660 }, 00:04:51.660 "claimed": false, 00:04:51.660 "zoned": false, 00:04:51.660 "supported_io_types": { 00:04:51.660 "read": true, 00:04:51.660 "write": true, 00:04:51.660 "unmap": true, 00:04:51.660 "flush": true, 00:04:51.660 "reset": true, 00:04:51.660 "nvme_admin": false, 00:04:51.660 "nvme_io": false, 00:04:51.660 "nvme_io_md": false, 00:04:51.660 "write_zeroes": true, 00:04:51.660 "zcopy": true, 00:04:51.660 "get_zone_info": false, 00:04:51.660 "zone_management": false, 00:04:51.660 
"zone_append": false, 00:04:51.660 "compare": false, 00:04:51.660 "compare_and_write": false, 00:04:51.660 "abort": true, 00:04:51.660 "seek_hole": false, 00:04:51.660 "seek_data": false, 00:04:51.660 "copy": true, 00:04:51.660 "nvme_iov_md": false 00:04:51.660 }, 00:04:51.660 "memory_domains": [ 00:04:51.660 { 00:04:51.660 "dma_device_id": "system", 00:04:51.660 "dma_device_type": 1 00:04:51.660 }, 00:04:51.660 { 00:04:51.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.660 "dma_device_type": 2 00:04:51.660 } 00:04:51.660 ], 00:04:51.660 "driver_specific": {} 00:04:51.660 } 00:04:51.660 ]' 00:04:51.660 15:21:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:51.660 15:21:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:51.660 15:21:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.660 15:21:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.660 15:21:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:51.660 15:21:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:51.660 15:21:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:51.660 00:04:51.660 real 0m0.135s 00:04:51.660 user 0m0.083s 00:04:51.660 sys 0m0.018s 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.660 15:21:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.660 ************************************ 
00:04:51.660 END TEST rpc_plugins 00:04:51.660 ************************************ 00:04:51.919 15:21:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:51.919 15:21:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.919 15:21:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.919 15:21:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.919 ************************************ 00:04:51.919 START TEST rpc_trace_cmd_test 00:04:51.919 ************************************ 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:51.919 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2813458", 00:04:51.919 "tpoint_group_mask": "0x8", 00:04:51.919 "iscsi_conn": { 00:04:51.919 "mask": "0x2", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "scsi": { 00:04:51.919 "mask": "0x4", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "bdev": { 00:04:51.919 "mask": "0x8", 00:04:51.919 "tpoint_mask": "0xffffffffffffffff" 00:04:51.919 }, 00:04:51.919 "nvmf_rdma": { 00:04:51.919 "mask": "0x10", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "nvmf_tcp": { 00:04:51.919 "mask": "0x20", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "ftl": { 00:04:51.919 "mask": "0x40", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "blobfs": { 00:04:51.919 "mask": "0x80", 00:04:51.919 
"tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "dsa": { 00:04:51.919 "mask": "0x200", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "thread": { 00:04:51.919 "mask": "0x400", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "nvme_pcie": { 00:04:51.919 "mask": "0x800", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "iaa": { 00:04:51.919 "mask": "0x1000", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "nvme_tcp": { 00:04:51.919 "mask": "0x2000", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "bdev_nvme": { 00:04:51.919 "mask": "0x4000", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "sock": { 00:04:51.919 "mask": "0x8000", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "blob": { 00:04:51.919 "mask": "0x10000", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "bdev_raid": { 00:04:51.919 "mask": "0x20000", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 }, 00:04:51.919 "scheduler": { 00:04:51.919 "mask": "0x40000", 00:04:51.919 "tpoint_mask": "0x0" 00:04:51.919 } 00:04:51.919 }' 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:51.919 15:21:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:52.178 15:21:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:52.178 00:04:52.178 real 0m0.229s 00:04:52.178 user 0m0.196s 00:04:52.178 sys 0m0.025s 00:04:52.178 15:21:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.178 15:21:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:52.178 ************************************ 00:04:52.178 END TEST rpc_trace_cmd_test 00:04:52.178 ************************************ 00:04:52.178 15:21:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:52.178 15:21:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:52.178 15:21:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:52.178 15:21:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.178 15:21:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.178 15:21:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.178 ************************************ 00:04:52.178 START TEST rpc_daemon_integrity 00:04:52.178 ************************************ 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:52.178 { 00:04:52.178 "name": "Malloc2", 00:04:52.178 "aliases": [ 00:04:52.178 "ae7fe18a-1b5d-42ae-b03d-a0389fbd6835" 00:04:52.178 ], 00:04:52.178 "product_name": "Malloc disk", 00:04:52.178 "block_size": 512, 00:04:52.178 "num_blocks": 16384, 00:04:52.178 "uuid": "ae7fe18a-1b5d-42ae-b03d-a0389fbd6835", 00:04:52.178 "assigned_rate_limits": { 00:04:52.178 "rw_ios_per_sec": 0, 00:04:52.178 "rw_mbytes_per_sec": 0, 00:04:52.178 "r_mbytes_per_sec": 0, 00:04:52.178 "w_mbytes_per_sec": 0 00:04:52.178 }, 00:04:52.178 "claimed": false, 00:04:52.178 "zoned": false, 00:04:52.178 "supported_io_types": { 00:04:52.178 "read": true, 00:04:52.178 "write": true, 00:04:52.178 "unmap": true, 00:04:52.178 "flush": true, 00:04:52.178 "reset": true, 00:04:52.178 "nvme_admin": false, 00:04:52.178 "nvme_io": false, 00:04:52.178 "nvme_io_md": false, 00:04:52.178 "write_zeroes": true, 00:04:52.178 "zcopy": true, 00:04:52.178 "get_zone_info": false, 00:04:52.178 "zone_management": false, 00:04:52.178 "zone_append": false, 00:04:52.178 "compare": false, 00:04:52.178 "compare_and_write": false, 00:04:52.178 "abort": true, 00:04:52.178 "seek_hole": false, 00:04:52.178 "seek_data": false, 00:04:52.178 "copy": true, 00:04:52.178 "nvme_iov_md": false 00:04:52.178 }, 00:04:52.178 "memory_domains": [ 00:04:52.178 { 
00:04:52.178 "dma_device_id": "system", 00:04:52.178 "dma_device_type": 1 00:04:52.178 }, 00:04:52.178 { 00:04:52.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.178 "dma_device_type": 2 00:04:52.178 } 00:04:52.178 ], 00:04:52.178 "driver_specific": {} 00:04:52.178 } 00:04:52.178 ]' 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.178 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.178 [2024-12-06 15:21:58.138954] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:52.178 [2024-12-06 15:21:58.138980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:52.178 [2024-12-06 15:21:58.138991] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x236b450 00:04:52.178 [2024-12-06 15:21:58.138998] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:52.178 [2024-12-06 15:21:58.139960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:52.179 [2024-12-06 15:21:58.139979] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:52.179 Passthru0 00:04:52.179 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.179 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:52.179 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.179 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.179 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:52.179 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:52.179 { 00:04:52.179 "name": "Malloc2", 00:04:52.179 "aliases": [ 00:04:52.179 "ae7fe18a-1b5d-42ae-b03d-a0389fbd6835" 00:04:52.179 ], 00:04:52.179 "product_name": "Malloc disk", 00:04:52.179 "block_size": 512, 00:04:52.179 "num_blocks": 16384, 00:04:52.179 "uuid": "ae7fe18a-1b5d-42ae-b03d-a0389fbd6835", 00:04:52.179 "assigned_rate_limits": { 00:04:52.179 "rw_ios_per_sec": 0, 00:04:52.179 "rw_mbytes_per_sec": 0, 00:04:52.179 "r_mbytes_per_sec": 0, 00:04:52.179 "w_mbytes_per_sec": 0 00:04:52.179 }, 00:04:52.179 "claimed": true, 00:04:52.179 "claim_type": "exclusive_write", 00:04:52.179 "zoned": false, 00:04:52.179 "supported_io_types": { 00:04:52.179 "read": true, 00:04:52.179 "write": true, 00:04:52.179 "unmap": true, 00:04:52.179 "flush": true, 00:04:52.179 "reset": true, 00:04:52.179 "nvme_admin": false, 00:04:52.179 "nvme_io": false, 00:04:52.179 "nvme_io_md": false, 00:04:52.179 "write_zeroes": true, 00:04:52.179 "zcopy": true, 00:04:52.179 "get_zone_info": false, 00:04:52.179 "zone_management": false, 00:04:52.179 "zone_append": false, 00:04:52.179 "compare": false, 00:04:52.179 "compare_and_write": false, 00:04:52.179 "abort": true, 00:04:52.179 "seek_hole": false, 00:04:52.179 "seek_data": false, 00:04:52.179 "copy": true, 00:04:52.179 "nvme_iov_md": false 00:04:52.179 }, 00:04:52.179 "memory_domains": [ 00:04:52.179 { 00:04:52.179 "dma_device_id": "system", 00:04:52.179 "dma_device_type": 1 00:04:52.179 }, 00:04:52.179 { 00:04:52.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.179 "dma_device_type": 2 00:04:52.179 } 00:04:52.179 ], 00:04:52.179 "driver_specific": {} 00:04:52.179 }, 00:04:52.179 { 00:04:52.179 "name": "Passthru0", 00:04:52.179 "aliases": [ 00:04:52.179 "ef0a07e5-2a2d-53f2-97d9-2b1f9482fa66" 00:04:52.179 ], 00:04:52.179 "product_name": "passthru", 00:04:52.179 "block_size": 512, 00:04:52.179 "num_blocks": 16384, 00:04:52.179 "uuid": 
"ef0a07e5-2a2d-53f2-97d9-2b1f9482fa66", 00:04:52.179 "assigned_rate_limits": { 00:04:52.179 "rw_ios_per_sec": 0, 00:04:52.179 "rw_mbytes_per_sec": 0, 00:04:52.179 "r_mbytes_per_sec": 0, 00:04:52.179 "w_mbytes_per_sec": 0 00:04:52.179 }, 00:04:52.179 "claimed": false, 00:04:52.179 "zoned": false, 00:04:52.179 "supported_io_types": { 00:04:52.179 "read": true, 00:04:52.179 "write": true, 00:04:52.179 "unmap": true, 00:04:52.179 "flush": true, 00:04:52.179 "reset": true, 00:04:52.179 "nvme_admin": false, 00:04:52.179 "nvme_io": false, 00:04:52.179 "nvme_io_md": false, 00:04:52.179 "write_zeroes": true, 00:04:52.179 "zcopy": true, 00:04:52.179 "get_zone_info": false, 00:04:52.179 "zone_management": false, 00:04:52.179 "zone_append": false, 00:04:52.179 "compare": false, 00:04:52.179 "compare_and_write": false, 00:04:52.179 "abort": true, 00:04:52.179 "seek_hole": false, 00:04:52.179 "seek_data": false, 00:04:52.179 "copy": true, 00:04:52.179 "nvme_iov_md": false 00:04:52.179 }, 00:04:52.179 "memory_domains": [ 00:04:52.179 { 00:04:52.179 "dma_device_id": "system", 00:04:52.179 "dma_device_type": 1 00:04:52.179 }, 00:04:52.179 { 00:04:52.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.179 "dma_device_type": 2 00:04:52.179 } 00:04:52.179 ], 00:04:52.179 "driver_specific": { 00:04:52.179 "passthru": { 00:04:52.179 "name": "Passthru0", 00:04:52.179 "base_bdev_name": "Malloc2" 00:04:52.179 } 00:04:52.179 } 00:04:52.179 } 00:04:52.179 ]' 00:04:52.179 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:52.438 00:04:52.438 real 0m0.280s 00:04:52.438 user 0m0.175s 00:04:52.438 sys 0m0.038s 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.438 15:21:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.438 ************************************ 00:04:52.438 END TEST rpc_daemon_integrity 00:04:52.438 ************************************ 00:04:52.438 15:21:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:52.438 15:21:58 rpc -- rpc/rpc.sh@84 -- # killprocess 2813458 00:04:52.438 15:21:58 rpc -- common/autotest_common.sh@954 -- # '[' -z 2813458 ']' 00:04:52.438 15:21:58 rpc -- common/autotest_common.sh@958 -- # kill -0 2813458 00:04:52.438 15:21:58 rpc -- common/autotest_common.sh@959 -- # uname 00:04:52.438 15:21:58 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.438 15:21:58 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2813458 00:04:52.438 15:21:58 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.438 15:21:58 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.438 15:21:58 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2813458' 00:04:52.438 killing process with pid 2813458 00:04:52.438 15:21:58 rpc -- common/autotest_common.sh@973 -- # kill 2813458 00:04:52.438 15:21:58 rpc -- common/autotest_common.sh@978 -- # wait 2813458 00:04:52.696 00:04:52.696 real 0m2.099s 00:04:52.696 user 0m2.694s 00:04:52.696 sys 0m0.683s 00:04:52.696 15:21:58 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.696 15:21:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.696 ************************************ 00:04:52.696 END TEST rpc 00:04:52.696 ************************************ 00:04:52.955 15:21:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.955 15:21:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.955 15:21:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.955 15:21:58 -- common/autotest_common.sh@10 -- # set +x 00:04:52.955 ************************************ 00:04:52.955 START TEST skip_rpc 00:04:52.955 ************************************ 00:04:52.955 15:21:58 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.955 * Looking for test storage... 
00:04:52.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.955 15:21:58 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:52.955 15:21:58 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:52.955 15:21:58 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:52.955 15:21:58 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.955 15:21:58 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:52.955 15:21:58 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.955 15:21:58 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:52.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.955 --rc genhtml_branch_coverage=1 00:04:52.955 --rc genhtml_function_coverage=1 00:04:52.955 --rc genhtml_legend=1 00:04:52.955 --rc geninfo_all_blocks=1 00:04:52.955 --rc geninfo_unexecuted_blocks=1 00:04:52.955 00:04:52.955 ' 00:04:52.955 15:21:58 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:52.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.955 --rc genhtml_branch_coverage=1 00:04:52.955 --rc genhtml_function_coverage=1 00:04:52.955 --rc genhtml_legend=1 00:04:52.955 --rc geninfo_all_blocks=1 00:04:52.955 --rc geninfo_unexecuted_blocks=1 00:04:52.955 00:04:52.955 ' 00:04:52.955 15:21:58 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:52.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.955 --rc genhtml_branch_coverage=1 00:04:52.955 --rc genhtml_function_coverage=1 00:04:52.955 --rc genhtml_legend=1 00:04:52.955 --rc geninfo_all_blocks=1 00:04:52.955 --rc geninfo_unexecuted_blocks=1 00:04:52.955 00:04:52.955 ' 00:04:52.955 15:21:58 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:52.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.955 --rc genhtml_branch_coverage=1 00:04:52.955 --rc genhtml_function_coverage=1 00:04:52.955 --rc genhtml_legend=1 00:04:52.955 --rc geninfo_all_blocks=1 00:04:52.955 --rc geninfo_unexecuted_blocks=1 00:04:52.955 00:04:52.955 ' 00:04:52.955 15:21:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.955 15:21:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:52.955 15:21:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:52.955 15:21:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.955 15:21:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.955 15:21:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.214 ************************************ 00:04:53.214 START TEST skip_rpc 00:04:53.214 ************************************ 00:04:53.214 15:21:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:53.214 15:21:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2814025 00:04:53.214 15:21:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.214 15:21:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:53.214 15:21:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:53.214 [2024-12-06 15:21:59.014077] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:04:53.214 [2024-12-06 15:21:59.014114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814025 ] 00:04:53.214 [2024-12-06 15:21:59.085688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.214 [2024-12-06 15:21:59.125654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:58.496 15:22:03 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2814025 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2814025 ']' 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2814025 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.496 15:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2814025 00:04:58.496 15:22:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.496 15:22:04 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.496 15:22:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2814025' 00:04:58.496 killing process with pid 2814025 00:04:58.496 15:22:04 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2814025 00:04:58.496 15:22:04 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2814025 00:04:58.496 00:04:58.496 real 0m5.368s 00:04:58.496 user 0m5.135s 00:04:58.496 sys 0m0.275s 00:04:58.496 15:22:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.496 15:22:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.496 ************************************ 00:04:58.496 END TEST skip_rpc 00:04:58.496 ************************************ 00:04:58.496 15:22:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:58.496 15:22:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.496 15:22:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.496 15:22:04 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.496 ************************************ 00:04:58.496 START TEST skip_rpc_with_json 00:04:58.496 ************************************ 00:04:58.496 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:58.496 15:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:58.496 15:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2814948 00:04:58.496 15:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.496 15:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.496 15:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2814948 00:04:58.496 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2814948 ']' 00:04:58.496 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.496 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.496 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.496 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.496 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.496 [2024-12-06 15:22:04.452432] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:04:58.496 [2024-12-06 15:22:04.452474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814948 ] 00:04:58.754 [2024-12-06 15:22:04.528198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.754 [2024-12-06 15:22:04.570063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.013 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.013 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:59.013 15:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:59.013 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.013 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.013 [2024-12-06 15:22:04.789522] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:59.013 request: 00:04:59.013 { 00:04:59.013 "trtype": "tcp", 00:04:59.013 "method": "nvmf_get_transports", 00:04:59.013 "req_id": 1 00:04:59.013 } 00:04:59.013 Got JSON-RPC error response 00:04:59.013 response: 00:04:59.013 { 00:04:59.013 "code": -19, 00:04:59.013 "message": "No such device" 00:04:59.013 } 00:04:59.013 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:59.013 15:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:59.013 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.013 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.013 [2024-12-06 15:22:04.801629] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:59.013 15:22:04 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.013 15:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:59.013 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.013 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.013 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.013 15:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:59.013 { 00:04:59.013 "subsystems": [ 00:04:59.013 { 00:04:59.013 "subsystem": "fsdev", 00:04:59.013 "config": [ 00:04:59.013 { 00:04:59.013 "method": "fsdev_set_opts", 00:04:59.013 "params": { 00:04:59.013 "fsdev_io_pool_size": 65535, 00:04:59.013 "fsdev_io_cache_size": 256 00:04:59.013 } 00:04:59.013 } 00:04:59.013 ] 00:04:59.013 }, 00:04:59.013 { 00:04:59.013 "subsystem": "vfio_user_target", 00:04:59.013 "config": null 00:04:59.013 }, 00:04:59.013 { 00:04:59.013 "subsystem": "keyring", 00:04:59.013 "config": [] 00:04:59.013 }, 00:04:59.013 { 00:04:59.013 "subsystem": "iobuf", 00:04:59.013 "config": [ 00:04:59.013 { 00:04:59.013 "method": "iobuf_set_options", 00:04:59.013 "params": { 00:04:59.013 "small_pool_count": 8192, 00:04:59.013 "large_pool_count": 1024, 00:04:59.013 "small_bufsize": 8192, 00:04:59.013 "large_bufsize": 135168, 00:04:59.013 "enable_numa": false 00:04:59.013 } 00:04:59.013 } 00:04:59.013 ] 00:04:59.013 }, 00:04:59.013 { 00:04:59.013 "subsystem": "sock", 00:04:59.013 "config": [ 00:04:59.013 { 00:04:59.013 "method": "sock_set_default_impl", 00:04:59.013 "params": { 00:04:59.013 "impl_name": "posix" 00:04:59.013 } 00:04:59.013 }, 00:04:59.013 { 00:04:59.013 "method": "sock_impl_set_options", 00:04:59.013 "params": { 00:04:59.013 "impl_name": "ssl", 00:04:59.013 "recv_buf_size": 4096, 00:04:59.013 "send_buf_size": 4096, 
00:04:59.013 "enable_recv_pipe": true, 00:04:59.013 "enable_quickack": false, 00:04:59.013 "enable_placement_id": 0, 00:04:59.013 "enable_zerocopy_send_server": true, 00:04:59.013 "enable_zerocopy_send_client": false, 00:04:59.013 "zerocopy_threshold": 0, 00:04:59.013 "tls_version": 0, 00:04:59.013 "enable_ktls": false 00:04:59.013 } 00:04:59.013 }, 00:04:59.013 { 00:04:59.013 "method": "sock_impl_set_options", 00:04:59.013 "params": { 00:04:59.013 "impl_name": "posix", 00:04:59.013 "recv_buf_size": 2097152, 00:04:59.013 "send_buf_size": 2097152, 00:04:59.013 "enable_recv_pipe": true, 00:04:59.013 "enable_quickack": false, 00:04:59.013 "enable_placement_id": 0, 00:04:59.013 "enable_zerocopy_send_server": true, 00:04:59.013 "enable_zerocopy_send_client": false, 00:04:59.013 "zerocopy_threshold": 0, 00:04:59.013 "tls_version": 0, 00:04:59.013 "enable_ktls": false 00:04:59.013 } 00:04:59.013 } 00:04:59.013 ] 00:04:59.013 }, 00:04:59.013 { 00:04:59.013 "subsystem": "vmd", 00:04:59.013 "config": [] 00:04:59.013 }, 00:04:59.013 { 00:04:59.013 "subsystem": "accel", 00:04:59.013 "config": [ 00:04:59.013 { 00:04:59.013 "method": "accel_set_options", 00:04:59.013 "params": { 00:04:59.013 "small_cache_size": 128, 00:04:59.013 "large_cache_size": 16, 00:04:59.013 "task_count": 2048, 00:04:59.013 "sequence_count": 2048, 00:04:59.013 "buf_count": 2048 00:04:59.013 } 00:04:59.013 } 00:04:59.013 ] 00:04:59.013 }, 00:04:59.013 { 00:04:59.013 "subsystem": "bdev", 00:04:59.013 "config": [ 00:04:59.013 { 00:04:59.013 "method": "bdev_set_options", 00:04:59.013 "params": { 00:04:59.013 "bdev_io_pool_size": 65535, 00:04:59.013 "bdev_io_cache_size": 256, 00:04:59.013 "bdev_auto_examine": true, 00:04:59.013 "iobuf_small_cache_size": 128, 00:04:59.013 "iobuf_large_cache_size": 16 00:04:59.013 } 00:04:59.013 }, 00:04:59.013 { 00:04:59.013 "method": "bdev_raid_set_options", 00:04:59.013 "params": { 00:04:59.013 "process_window_size_kb": 1024, 00:04:59.013 "process_max_bandwidth_mb_sec": 0 
00:04:59.013 } 00:04:59.013 }, 00:04:59.013 { 00:04:59.013 "method": "bdev_iscsi_set_options", 00:04:59.013 "params": { 00:04:59.013 "timeout_sec": 30 00:04:59.013 } 00:04:59.013 }, 00:04:59.013 { 00:04:59.013 "method": "bdev_nvme_set_options", 00:04:59.013 "params": { 00:04:59.013 "action_on_timeout": "none", 00:04:59.014 "timeout_us": 0, 00:04:59.014 "timeout_admin_us": 0, 00:04:59.014 "keep_alive_timeout_ms": 10000, 00:04:59.014 "arbitration_burst": 0, 00:04:59.014 "low_priority_weight": 0, 00:04:59.014 "medium_priority_weight": 0, 00:04:59.014 "high_priority_weight": 0, 00:04:59.014 "nvme_adminq_poll_period_us": 10000, 00:04:59.014 "nvme_ioq_poll_period_us": 0, 00:04:59.014 "io_queue_requests": 0, 00:04:59.014 "delay_cmd_submit": true, 00:04:59.014 "transport_retry_count": 4, 00:04:59.014 "bdev_retry_count": 3, 00:04:59.014 "transport_ack_timeout": 0, 00:04:59.014 "ctrlr_loss_timeout_sec": 0, 00:04:59.014 "reconnect_delay_sec": 0, 00:04:59.014 "fast_io_fail_timeout_sec": 0, 00:04:59.014 "disable_auto_failback": false, 00:04:59.014 "generate_uuids": false, 00:04:59.014 "transport_tos": 0, 00:04:59.014 "nvme_error_stat": false, 00:04:59.014 "rdma_srq_size": 0, 00:04:59.014 "io_path_stat": false, 00:04:59.014 "allow_accel_sequence": false, 00:04:59.014 "rdma_max_cq_size": 0, 00:04:59.014 "rdma_cm_event_timeout_ms": 0, 00:04:59.014 "dhchap_digests": [ 00:04:59.014 "sha256", 00:04:59.014 "sha384", 00:04:59.014 "sha512" 00:04:59.014 ], 00:04:59.014 "dhchap_dhgroups": [ 00:04:59.014 "null", 00:04:59.014 "ffdhe2048", 00:04:59.014 "ffdhe3072", 00:04:59.014 "ffdhe4096", 00:04:59.014 "ffdhe6144", 00:04:59.014 "ffdhe8192" 00:04:59.014 ] 00:04:59.014 } 00:04:59.014 }, 00:04:59.014 { 00:04:59.014 "method": "bdev_nvme_set_hotplug", 00:04:59.014 "params": { 00:04:59.014 "period_us": 100000, 00:04:59.014 "enable": false 00:04:59.014 } 00:04:59.014 }, 00:04:59.014 { 00:04:59.014 "method": "bdev_wait_for_examine" 00:04:59.014 } 00:04:59.014 ] 00:04:59.014 }, 00:04:59.014 { 
00:04:59.014 "subsystem": "scsi", 00:04:59.014 "config": null 00:04:59.014 }, 00:04:59.014 { 00:04:59.014 "subsystem": "scheduler", 00:04:59.014 "config": [ 00:04:59.014 { 00:04:59.014 "method": "framework_set_scheduler", 00:04:59.014 "params": { 00:04:59.014 "name": "static" 00:04:59.014 } 00:04:59.014 } 00:04:59.014 ] 00:04:59.014 }, 00:04:59.014 { 00:04:59.014 "subsystem": "vhost_scsi", 00:04:59.014 "config": [] 00:04:59.014 }, 00:04:59.014 { 00:04:59.014 "subsystem": "vhost_blk", 00:04:59.014 "config": [] 00:04:59.014 }, 00:04:59.014 { 00:04:59.014 "subsystem": "ublk", 00:04:59.014 "config": [] 00:04:59.014 }, 00:04:59.014 { 00:04:59.014 "subsystem": "nbd", 00:04:59.014 "config": [] 00:04:59.014 }, 00:04:59.014 { 00:04:59.014 "subsystem": "nvmf", 00:04:59.014 "config": [ 00:04:59.014 { 00:04:59.014 "method": "nvmf_set_config", 00:04:59.014 "params": { 00:04:59.014 "discovery_filter": "match_any", 00:04:59.014 "admin_cmd_passthru": { 00:04:59.014 "identify_ctrlr": false 00:04:59.014 }, 00:04:59.014 "dhchap_digests": [ 00:04:59.014 "sha256", 00:04:59.014 "sha384", 00:04:59.014 "sha512" 00:04:59.014 ], 00:04:59.014 "dhchap_dhgroups": [ 00:04:59.014 "null", 00:04:59.014 "ffdhe2048", 00:04:59.014 "ffdhe3072", 00:04:59.014 "ffdhe4096", 00:04:59.014 "ffdhe6144", 00:04:59.014 "ffdhe8192" 00:04:59.014 ] 00:04:59.014 } 00:04:59.014 }, 00:04:59.014 { 00:04:59.014 "method": "nvmf_set_max_subsystems", 00:04:59.014 "params": { 00:04:59.014 "max_subsystems": 1024 00:04:59.014 } 00:04:59.014 }, 00:04:59.014 { 00:04:59.014 "method": "nvmf_set_crdt", 00:04:59.014 "params": { 00:04:59.014 "crdt1": 0, 00:04:59.014 "crdt2": 0, 00:04:59.014 "crdt3": 0 00:04:59.014 } 00:04:59.014 }, 00:04:59.014 { 00:04:59.014 "method": "nvmf_create_transport", 00:04:59.014 "params": { 00:04:59.014 "trtype": "TCP", 00:04:59.014 "max_queue_depth": 128, 00:04:59.014 "max_io_qpairs_per_ctrlr": 127, 00:04:59.014 "in_capsule_data_size": 4096, 00:04:59.014 "max_io_size": 131072, 00:04:59.014 
"io_unit_size": 131072, 00:04:59.014 "max_aq_depth": 128, 00:04:59.014 "num_shared_buffers": 511, 00:04:59.014 "buf_cache_size": 4294967295, 00:04:59.014 "dif_insert_or_strip": false, 00:04:59.014 "zcopy": false, 00:04:59.014 "c2h_success": true, 00:04:59.014 "sock_priority": 0, 00:04:59.014 "abort_timeout_sec": 1, 00:04:59.014 "ack_timeout": 0, 00:04:59.014 "data_wr_pool_size": 0 00:04:59.014 } 00:04:59.014 } 00:04:59.014 ] 00:04:59.014 }, 00:04:59.014 { 00:04:59.014 "subsystem": "iscsi", 00:04:59.014 "config": [ 00:04:59.014 { 00:04:59.014 "method": "iscsi_set_options", 00:04:59.014 "params": { 00:04:59.014 "node_base": "iqn.2016-06.io.spdk", 00:04:59.014 "max_sessions": 128, 00:04:59.014 "max_connections_per_session": 2, 00:04:59.014 "max_queue_depth": 64, 00:04:59.014 "default_time2wait": 2, 00:04:59.014 "default_time2retain": 20, 00:04:59.014 "first_burst_length": 8192, 00:04:59.014 "immediate_data": true, 00:04:59.014 "allow_duplicated_isid": false, 00:04:59.014 "error_recovery_level": 0, 00:04:59.014 "nop_timeout": 60, 00:04:59.014 "nop_in_interval": 30, 00:04:59.014 "disable_chap": false, 00:04:59.014 "require_chap": false, 00:04:59.014 "mutual_chap": false, 00:04:59.014 "chap_group": 0, 00:04:59.014 "max_large_datain_per_connection": 64, 00:04:59.014 "max_r2t_per_connection": 4, 00:04:59.014 "pdu_pool_size": 36864, 00:04:59.014 "immediate_data_pool_size": 16384, 00:04:59.014 "data_out_pool_size": 2048 00:04:59.014 } 00:04:59.014 } 00:04:59.014 ] 00:04:59.014 } 00:04:59.014 ] 00:04:59.014 } 00:04:59.014 15:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:59.014 15:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2814948 00:04:59.014 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2814948 ']' 00:04:59.014 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2814948 00:04:59.014 15:22:04 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:04:59.014 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.014 15:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2814948 00:04:59.272 15:22:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.272 15:22:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.272 15:22:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2814948' 00:04:59.272 killing process with pid 2814948 00:04:59.272 15:22:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2814948 00:04:59.272 15:22:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2814948 00:04:59.530 15:22:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2815062 00:04:59.530 15:22:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:59.530 15:22:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2815062 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2815062 ']' 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2815062 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2815062 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2815062' 00:05:04.811 killing process with pid 2815062 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2815062 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2815062 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.811 00:05:04.811 real 0m6.292s 00:05:04.811 user 0m5.963s 00:05:04.811 sys 0m0.615s 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.811 ************************************ 00:05:04.811 END TEST skip_rpc_with_json 00:05:04.811 ************************************ 00:05:04.811 15:22:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:04.811 15:22:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.811 15:22:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.811 15:22:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.811 ************************************ 00:05:04.811 START TEST skip_rpc_with_delay 00:05:04.811 ************************************ 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.811 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.812 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.812 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.812 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.812 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.812 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.812 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.812 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:04.812 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.071 [2024-12-06 15:22:10.819604] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:05.071 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:05.071 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.071 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:05.071 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.071 00:05:05.071 real 0m0.070s 00:05:05.071 user 0m0.046s 00:05:05.071 sys 0m0.024s 00:05:05.071 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.071 15:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:05.071 ************************************ 00:05:05.071 END TEST skip_rpc_with_delay 00:05:05.071 ************************************ 00:05:05.071 15:22:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:05.071 15:22:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:05.071 15:22:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:05.071 15:22:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.071 15:22:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.071 15:22:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.071 ************************************ 00:05:05.071 START TEST exit_on_failed_rpc_init 00:05:05.071 ************************************ 00:05:05.071 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:05.071 15:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2816039 00:05:05.071 15:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2816039 00:05:05.071 15:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:05:05.071 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2816039 ']' 00:05:05.071 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.071 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.071 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.071 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.071 15:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.071 [2024-12-06 15:22:10.956129] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:05:05.071 [2024-12-06 15:22:10.956172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2816039 ] 00:05:05.071 [2024-12-06 15:22:11.031456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.330 [2024-12-06 15:22:11.074319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.330 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.330 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:05.330 15:22:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.330 15:22:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.330 
15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:05.330 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.330 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.330 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.330 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.330 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.331 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.331 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.331 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.331 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:05.331 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.589 [2024-12-06 15:22:11.348672] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:05:05.589 [2024-12-06 15:22:11.348718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2816187 ] 00:05:05.589 [2024-12-06 15:22:11.421949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.590 [2024-12-06 15:22:11.462412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.590 [2024-12-06 15:22:11.462466] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:05.590 [2024-12-06 15:22:11.462475] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:05.590 [2024-12-06 15:22:11.462481] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2816039 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2816039 ']' 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2816039 00:05:05.590 15:22:11 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2816039 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2816039' 00:05:05.590 killing process with pid 2816039 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2816039 00:05:05.590 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2816039 00:05:06.157 00:05:06.157 real 0m0.955s 00:05:06.157 user 0m1.016s 00:05:06.157 sys 0m0.390s 00:05:06.157 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.158 15:22:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.158 ************************************ 00:05:06.158 END TEST exit_on_failed_rpc_init 00:05:06.158 ************************************ 00:05:06.158 15:22:11 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:06.158 00:05:06.158 real 0m13.149s 00:05:06.158 user 0m12.371s 00:05:06.158 sys 0m1.587s 00:05:06.158 15:22:11 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.158 15:22:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.158 ************************************ 00:05:06.158 END TEST skip_rpc 00:05:06.158 ************************************ 00:05:06.158 15:22:11 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:06.158 15:22:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.158 15:22:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.158 15:22:11 -- common/autotest_common.sh@10 -- # set +x 00:05:06.158 ************************************ 00:05:06.158 START TEST rpc_client 00:05:06.158 ************************************ 00:05:06.158 15:22:11 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:06.158 * Looking for test storage... 00:05:06.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:06.158 15:22:12 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:06.158 15:22:12 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:06.158 15:22:12 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:06.158 15:22:12 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.158 15:22:12 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:06.158 15:22:12 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.158 15:22:12 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:06.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.158 --rc genhtml_branch_coverage=1 00:05:06.158 --rc genhtml_function_coverage=1 00:05:06.158 --rc genhtml_legend=1 00:05:06.158 --rc geninfo_all_blocks=1 00:05:06.158 --rc geninfo_unexecuted_blocks=1 00:05:06.158 00:05:06.158 ' 00:05:06.158 15:22:12 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:06.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.158 --rc genhtml_branch_coverage=1 
00:05:06.158 --rc genhtml_function_coverage=1 00:05:06.158 --rc genhtml_legend=1 00:05:06.158 --rc geninfo_all_blocks=1 00:05:06.158 --rc geninfo_unexecuted_blocks=1 00:05:06.158 00:05:06.158 ' 00:05:06.158 15:22:12 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:06.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.158 --rc genhtml_branch_coverage=1 00:05:06.158 --rc genhtml_function_coverage=1 00:05:06.158 --rc genhtml_legend=1 00:05:06.158 --rc geninfo_all_blocks=1 00:05:06.158 --rc geninfo_unexecuted_blocks=1 00:05:06.158 00:05:06.158 ' 00:05:06.158 15:22:12 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:06.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.158 --rc genhtml_branch_coverage=1 00:05:06.158 --rc genhtml_function_coverage=1 00:05:06.158 --rc genhtml_legend=1 00:05:06.158 --rc geninfo_all_blocks=1 00:05:06.158 --rc geninfo_unexecuted_blocks=1 00:05:06.158 00:05:06.158 ' 00:05:06.158 15:22:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:06.417 OK 00:05:06.417 15:22:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:06.417 00:05:06.417 real 0m0.200s 00:05:06.417 user 0m0.112s 00:05:06.417 sys 0m0.102s 00:05:06.417 15:22:12 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.417 15:22:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:06.417 ************************************ 00:05:06.417 END TEST rpc_client 00:05:06.417 ************************************ 00:05:06.417 15:22:12 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:06.417 15:22:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.418 15:22:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.418 15:22:12 -- common/autotest_common.sh@10 
-- # set +x 00:05:06.418 ************************************ 00:05:06.418 START TEST json_config 00:05:06.418 ************************************ 00:05:06.418 15:22:12 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:06.418 15:22:12 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:06.418 15:22:12 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:06.418 15:22:12 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:06.418 15:22:12 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:06.418 15:22:12 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.418 15:22:12 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.418 15:22:12 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.418 15:22:12 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.418 15:22:12 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.418 15:22:12 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.418 15:22:12 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.418 15:22:12 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.418 15:22:12 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.418 15:22:12 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.418 15:22:12 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.418 15:22:12 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:06.418 15:22:12 json_config -- scripts/common.sh@345 -- # : 1 00:05:06.418 15:22:12 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.418 15:22:12 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.418 15:22:12 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:06.418 15:22:12 json_config -- scripts/common.sh@353 -- # local d=1 00:05:06.418 15:22:12 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.418 15:22:12 json_config -- scripts/common.sh@355 -- # echo 1 00:05:06.418 15:22:12 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.418 15:22:12 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:06.418 15:22:12 json_config -- scripts/common.sh@353 -- # local d=2 00:05:06.418 15:22:12 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.418 15:22:12 json_config -- scripts/common.sh@355 -- # echo 2 00:05:06.418 15:22:12 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.418 15:22:12 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.418 15:22:12 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.418 15:22:12 json_config -- scripts/common.sh@368 -- # return 0 00:05:06.418 15:22:12 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.418 15:22:12 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:06.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.418 --rc genhtml_branch_coverage=1 00:05:06.418 --rc genhtml_function_coverage=1 00:05:06.418 --rc genhtml_legend=1 00:05:06.418 --rc geninfo_all_blocks=1 00:05:06.418 --rc geninfo_unexecuted_blocks=1 00:05:06.418 00:05:06.418 ' 00:05:06.418 15:22:12 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:06.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.418 --rc genhtml_branch_coverage=1 00:05:06.418 --rc genhtml_function_coverage=1 00:05:06.418 --rc genhtml_legend=1 00:05:06.418 --rc geninfo_all_blocks=1 00:05:06.418 --rc geninfo_unexecuted_blocks=1 00:05:06.418 00:05:06.418 ' 00:05:06.418 15:22:12 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:06.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.418 --rc genhtml_branch_coverage=1 00:05:06.418 --rc genhtml_function_coverage=1 00:05:06.418 --rc genhtml_legend=1 00:05:06.418 --rc geninfo_all_blocks=1 00:05:06.418 --rc geninfo_unexecuted_blocks=1 00:05:06.418 00:05:06.418 ' 00:05:06.418 15:22:12 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:06.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.418 --rc genhtml_branch_coverage=1 00:05:06.418 --rc genhtml_function_coverage=1 00:05:06.418 --rc genhtml_legend=1 00:05:06.418 --rc geninfo_all_blocks=1 00:05:06.418 --rc geninfo_unexecuted_blocks=1 00:05:06.418 00:05:06.418 ' 00:05:06.418 15:22:12 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.418 15:22:12 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.418 15:22:12 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.418 15:22:12 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.418 15:22:12 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.418 15:22:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.418 15:22:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.418 15:22:12 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.418 15:22:12 json_config -- paths/export.sh@5 -- # export PATH 00:05:06.418 15:22:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@51 -- # : 0 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.418 15:22:12 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:06.678 INFO: JSON configuration test init 00:05:06.678 15:22:12 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:06.678 15:22:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.678 15:22:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:06.678 15:22:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.678 15:22:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.678 15:22:12 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:06.678 15:22:12 json_config -- json_config/common.sh@9 -- # local app=target 00:05:06.678 15:22:12 json_config -- json_config/common.sh@10 -- # shift 00:05:06.678 15:22:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:06.678 15:22:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:06.678 15:22:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:06.678 15:22:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.678 15:22:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.678 15:22:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2816410 00:05:06.678 15:22:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:06.678 Waiting for target to run... 
00:05:06.678 15:22:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:06.678 15:22:12 json_config -- json_config/common.sh@25 -- # waitforlisten 2816410 /var/tmp/spdk_tgt.sock 00:05:06.678 15:22:12 json_config -- common/autotest_common.sh@835 -- # '[' -z 2816410 ']' 00:05:06.678 15:22:12 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.678 15:22:12 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.678 15:22:12 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:06.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.678 15:22:12 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.678 15:22:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.678 [2024-12-06 15:22:12.486068] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:05:06.678 [2024-12-06 15:22:12.486120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2816410 ] 00:05:06.937 [2024-12-06 15:22:12.775218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.937 [2024-12-06 15:22:12.809625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.504 15:22:13 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.504 15:22:13 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:07.504 15:22:13 json_config -- json_config/common.sh@26 -- # echo '' 00:05:07.504 00:05:07.504 15:22:13 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:07.504 15:22:13 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:07.504 15:22:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.504 15:22:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.504 15:22:13 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:07.504 15:22:13 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:07.504 15:22:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:07.504 15:22:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.504 15:22:13 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:07.504 15:22:13 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:07.505 15:22:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:10.790 15:22:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.790 15:22:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:10.790 15:22:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@54 -- # sort 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:10.790 15:22:16 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:10.790 15:22:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.790 15:22:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:10.790 15:22:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.790 15:22:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:10.790 15:22:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:10.790 15:22:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:11.050 MallocForNvmf0 00:05:11.050 15:22:16 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:05:11.050 15:22:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:11.316 MallocForNvmf1 00:05:11.316 15:22:17 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:11.316 15:22:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:11.316 [2024-12-06 15:22:17.250940] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.316 15:22:17 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:11.316 15:22:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:11.574 15:22:17 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:11.574 15:22:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:11.833 15:22:17 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:11.833 15:22:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:12.091 15:22:17 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:12.091 15:22:17 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:12.091 [2024-12-06 15:22:18.001303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:12.091 15:22:18 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:12.091 15:22:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.091 15:22:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.091 15:22:18 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:12.091 15:22:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.091 15:22:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.091 15:22:18 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:12.091 15:22:18 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:12.091 15:22:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:12.349 MallocBdevForConfigChangeCheck 00:05:12.349 15:22:18 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:12.349 15:22:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:12.349 15:22:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.349 15:22:18 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:12.349 15:22:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.915 15:22:18 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:05:12.915 INFO: shutting down applications... 00:05:12.915 15:22:18 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:12.915 15:22:18 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:12.915 15:22:18 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:12.915 15:22:18 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:14.812 Calling clear_iscsi_subsystem 00:05:14.812 Calling clear_nvmf_subsystem 00:05:14.812 Calling clear_nbd_subsystem 00:05:14.812 Calling clear_ublk_subsystem 00:05:14.812 Calling clear_vhost_blk_subsystem 00:05:14.812 Calling clear_vhost_scsi_subsystem 00:05:14.812 Calling clear_bdev_subsystem 00:05:14.812 15:22:20 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:14.812 15:22:20 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:14.812 15:22:20 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:14.812 15:22:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.812 15:22:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:14.812 15:22:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:15.378 15:22:21 json_config -- json_config/json_config.sh@352 -- # break 00:05:15.378 15:22:21 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:15.378 15:22:21 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:05:15.378 15:22:21 json_config -- json_config/common.sh@31 -- # local app=target 00:05:15.378 15:22:21 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:15.378 15:22:21 json_config -- json_config/common.sh@35 -- # [[ -n 2816410 ]] 00:05:15.378 15:22:21 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2816410 00:05:15.378 15:22:21 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:15.378 15:22:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.378 15:22:21 json_config -- json_config/common.sh@41 -- # kill -0 2816410 00:05:15.378 15:22:21 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.945 15:22:21 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.945 15:22:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.945 15:22:21 json_config -- json_config/common.sh@41 -- # kill -0 2816410 00:05:15.945 15:22:21 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:15.945 15:22:21 json_config -- json_config/common.sh@43 -- # break 00:05:15.945 15:22:21 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:15.945 15:22:21 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:15.945 SPDK target shutdown done 00:05:15.945 15:22:21 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:15.945 INFO: relaunching applications... 
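The shutdown sequence traced above (send SIGINT, then poll with `kill -0` up to 30 times before declaring "SPDK target shutdown done") can be sketched as a standalone helper. This is an illustrative reconstruction of the pattern, not SPDK's actual `json_config_test_shutdown_app`; the function name and retry budget mirror what the trace shows.

```shell
#!/usr/bin/env bash
# Sketch of the shutdown pattern in the trace: SIGINT the target, then
# poll with `kill -0` (signal 0 only tests process existence) until it
# exits or the retry budget (30 * 0.5s, as in common.sh@40) runs out.
shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null || true
    for (( i = 0; i < 30; i++ )); do
        # kill -0 fails once the process is gone -> clean shutdown
        kill -0 "$pid" 2>/dev/null || return 0
        sleep 0.5
    done
    return 1   # still alive after ~15s; caller escalates
}
```

Polling with signal 0 avoids races inherent in checking `/proc/$pid` and works for any PID the caller may signal.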
00:05:15.945 15:22:21 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.945 15:22:21 json_config -- json_config/common.sh@9 -- # local app=target 00:05:15.945 15:22:21 json_config -- json_config/common.sh@10 -- # shift 00:05:15.945 15:22:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.945 15:22:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.945 15:22:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.945 15:22:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.945 15:22:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.945 15:22:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2818136 00:05:15.945 15:22:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.945 Waiting for target to run... 00:05:15.945 15:22:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.945 15:22:21 json_config -- json_config/common.sh@25 -- # waitforlisten 2818136 /var/tmp/spdk_tgt.sock 00:05:15.945 15:22:21 json_config -- common/autotest_common.sh@835 -- # '[' -z 2818136 ']' 00:05:15.945 15:22:21 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.945 15:22:21 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.945 15:22:21 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:15.945 15:22:21 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.945 15:22:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.945 [2024-12-06 15:22:21.716630] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:05:15.945 [2024-12-06 15:22:21.716687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2818136 ] 00:05:16.203 [2024-12-06 15:22:22.181165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.461 [2024-12-06 15:22:22.230837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.746 [2024-12-06 15:22:25.263168] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.746 [2024-12-06 15:22:25.295522] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:20.004 15:22:25 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.004 15:22:25 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:20.004 15:22:25 json_config -- json_config/common.sh@26 -- # echo '' 00:05:20.004 00:05:20.004 15:22:25 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:20.004 15:22:25 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:20.004 INFO: Checking if target configuration is the same... 
00:05:20.004 15:22:25 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.004 15:22:25 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:20.004 15:22:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.004 + '[' 2 -ne 2 ']' 00:05:20.004 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:20.004 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:20.004 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:20.004 +++ basename /dev/fd/62 00:05:20.004 ++ mktemp /tmp/62.XXX 00:05:20.004 + tmp_file_1=/tmp/62.ryj 00:05:20.004 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.004 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:20.004 + tmp_file_2=/tmp/spdk_tgt_config.json.WFp 00:05:20.004 + ret=0 00:05:20.004 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.571 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.571 + diff -u /tmp/62.ryj /tmp/spdk_tgt_config.json.WFp 00:05:20.571 + echo 'INFO: JSON config files are the same' 00:05:20.571 INFO: JSON config files are the same 00:05:20.571 + rm /tmp/62.ryj /tmp/spdk_tgt_config.json.WFp 00:05:20.571 + exit 0 00:05:20.571 15:22:26 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:20.571 15:22:26 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:20.571 INFO: changing configuration and checking if this can be detected... 
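The "Checking if target configuration is the same" step above normalizes both JSON dumps (via `config_filter.py -method sort`) into `mktemp` files and compares them with `diff -u`. A minimal sketch of that normalize-and-diff idea, using Python's stock `json.tool --sort-keys` as a stand-in normalizer since `config_filter.py` is SPDK-specific:

```shell
#!/usr/bin/env bash
# Compare two JSON config files ignoring key order: normalize each
# through `python3 -m json.tool --sort-keys` into a temp file, then
# diff the results, mirroring the tmp_file_1/tmp_file_2 flow traced
# in json_diff.sh above.
same_config() {
    local a b rc=0
    a=$(mktemp) && b=$(mktemp)
    python3 -m json.tool --sort-keys "$1" > "$a"
    python3 -m json.tool --sort-keys "$2" > "$b"
    if ! diff -u "$a" "$b" > /dev/null; then
        rc=1   # a configuration change was detected
    fi
    rm -f "$a" "$b"
    return "$rc"
}
```

Sorting keys before diffing is what lets the test distinguish a genuine config change (the deleted `MallocBdevForConfigChangeCheck` bdev) from mere serialization-order noise.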
00:05:20.571 15:22:26 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:20.571 15:22:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:20.571 15:22:26 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:20.571 15:22:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.571 15:22:26 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.571 + '[' 2 -ne 2 ']' 00:05:20.571 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:20.571 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:20.571 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:20.571 +++ basename /dev/fd/62 00:05:20.571 ++ mktemp /tmp/62.XXX 00:05:20.571 + tmp_file_1=/tmp/62.0Mb 00:05:20.571 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.571 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:20.571 + tmp_file_2=/tmp/spdk_tgt_config.json.lak 00:05:20.571 + ret=0 00:05:20.571 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.140 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.140 + diff -u /tmp/62.0Mb /tmp/spdk_tgt_config.json.lak 00:05:21.140 + ret=1 00:05:21.140 + echo '=== Start of file: /tmp/62.0Mb ===' 00:05:21.140 + cat /tmp/62.0Mb 00:05:21.140 + echo '=== End of file: /tmp/62.0Mb ===' 00:05:21.140 + echo '' 00:05:21.140 + echo '=== Start of file: /tmp/spdk_tgt_config.json.lak ===' 00:05:21.140 + cat /tmp/spdk_tgt_config.json.lak 00:05:21.140 + echo '=== End of file: /tmp/spdk_tgt_config.json.lak ===' 00:05:21.140 + echo '' 00:05:21.140 + rm /tmp/62.0Mb /tmp/spdk_tgt_config.json.lak 00:05:21.140 + exit 1 00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:21.140 INFO: configuration change detected. 
00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:21.140 15:22:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.140 15:22:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@324 -- # [[ -n 2818136 ]] 00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:21.140 15:22:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.140 15:22:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:21.140 15:22:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.140 15:22:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.140 15:22:26 json_config -- json_config/json_config.sh@330 -- # killprocess 2818136 00:05:21.140 15:22:26 json_config -- common/autotest_common.sh@954 -- # '[' -z 2818136 ']' 00:05:21.140 15:22:26 json_config -- common/autotest_common.sh@958 -- # kill -0 
2818136 00:05:21.140 15:22:26 json_config -- common/autotest_common.sh@959 -- # uname 00:05:21.140 15:22:26 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.140 15:22:26 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2818136 00:05:21.140 15:22:27 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.140 15:22:27 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.140 15:22:27 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2818136' 00:05:21.140 killing process with pid 2818136 00:05:21.140 15:22:27 json_config -- common/autotest_common.sh@973 -- # kill 2818136 00:05:21.140 15:22:27 json_config -- common/autotest_common.sh@978 -- # wait 2818136 00:05:23.054 15:22:29 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.054 15:22:29 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:23.054 15:22:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.054 15:22:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.313 15:22:29 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:23.313 15:22:29 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:23.313 INFO: Success 00:05:23.313 00:05:23.313 real 0m16.834s 00:05:23.313 user 0m17.310s 00:05:23.313 sys 0m2.599s 00:05:23.313 15:22:29 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.313 15:22:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.313 ************************************ 00:05:23.313 END TEST json_config 00:05:23.313 ************************************ 00:05:23.313 15:22:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:23.313 15:22:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.313 15:22:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.313 15:22:29 -- common/autotest_common.sh@10 -- # set +x 00:05:23.313 ************************************ 00:05:23.313 START TEST json_config_extra_key 00:05:23.313 ************************************ 00:05:23.313 15:22:29 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:23.313 15:22:29 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.313 15:22:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.313 15:22:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.313 15:22:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.313 15:22:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.313 15:22:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.313 15:22:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.313 15:22:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.314 15:22:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:23.314 15:22:29 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.314 15:22:29 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.314 --rc genhtml_branch_coverage=1 00:05:23.314 --rc genhtml_function_coverage=1 00:05:23.314 --rc genhtml_legend=1 00:05:23.314 --rc geninfo_all_blocks=1 
00:05:23.314 --rc geninfo_unexecuted_blocks=1 00:05:23.314 00:05:23.314 ' 00:05:23.314 15:22:29 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.314 --rc genhtml_branch_coverage=1 00:05:23.314 --rc genhtml_function_coverage=1 00:05:23.314 --rc genhtml_legend=1 00:05:23.314 --rc geninfo_all_blocks=1 00:05:23.314 --rc geninfo_unexecuted_blocks=1 00:05:23.314 00:05:23.314 ' 00:05:23.314 15:22:29 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.314 --rc genhtml_branch_coverage=1 00:05:23.314 --rc genhtml_function_coverage=1 00:05:23.314 --rc genhtml_legend=1 00:05:23.314 --rc geninfo_all_blocks=1 00:05:23.314 --rc geninfo_unexecuted_blocks=1 00:05:23.314 00:05:23.314 ' 00:05:23.314 15:22:29 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.314 --rc genhtml_branch_coverage=1 00:05:23.314 --rc genhtml_function_coverage=1 00:05:23.314 --rc genhtml_legend=1 00:05:23.314 --rc geninfo_all_blocks=1 00:05:23.314 --rc geninfo_unexecuted_blocks=1 00:05:23.314 00:05:23.314 ' 00:05:23.314 15:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
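The `lt 1.15 2` / `cmp_versions` trace above (IFS=.-: splitting into `ver1`/`ver2` arrays, then element-wise comparison) implements a plain version-ordering check used to pick lcov options. A sketch of that comparison, with missing components treated as 0; this assumes purely numeric components, as in the trace:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions pattern traced above: split each version
# string on '.', '-' or ':' into an array, then compare element-wise.
# Returns 0 (true) when $1 is strictly less than $2.
ver_lt() {
    local IFS=.-: v ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # absent components default to 0, so "2" compares as "2.0.0"
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
    done
    return 1   # equal is not less-than
}
```

Element-wise numeric comparison is why `1.15 < 2` holds here even though a naive string compare would order `"1.15" > "2"` lexically wrong in the other direction.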
00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.314 15:22:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.573 15:22:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:23.573 15:22:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.573 15:22:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.573 15:22:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.573 15:22:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.573 15:22:29 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.573 15:22:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.573 15:22:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.573 15:22:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:23.573 15:22:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.573 15:22:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:23.573 15:22:29 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:23.573 15:22:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:23.573 15:22:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.573 15:22:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.573 15:22:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.573 15:22:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:23.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:23.573 15:22:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:23.573 15:22:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:23.573 15:22:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:23.573 15:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:23.573 15:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:23.573 15:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:23.573 15:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:23.573 15:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:23.573 15:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:23.573 15:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:23.573 15:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:23.573 15:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:23.573 15:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.573 15:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:23.573 INFO: launching applications... 00:05:23.574 15:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:23.574 15:22:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:23.574 15:22:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:23.574 15:22:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.574 15:22:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.574 15:22:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.574 15:22:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.574 15:22:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.574 15:22:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2819638 00:05:23.574 15:22:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.574 Waiting for target to run... 
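The "Waiting for target to run..." step above is `waitforlisten`, which polls until `spdk_tgt` is up on its UNIX domain socket. A simplified sketch of that polling loop; the real SPDK helper additionally issues an RPC over the socket, whereas this illustrative version only checks that the socket file exists:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll (up to max_retries, default
# 100 as in the trace) for the target's UNIX domain socket to appear.
waitforlisten() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        # -S tests for a socket-type file at this path
        if [ -S "$sock" ]; then return 0; fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

Bounding the retries keeps a failed launch from hanging the suite, which is why the trace caps `max_retries=100`.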
00:05:23.574 15:22:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2819638 /var/tmp/spdk_tgt.sock 00:05:23.574 15:22:29 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:23.574 15:22:29 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2819638 ']' 00:05:23.574 15:22:29 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.574 15:22:29 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.574 15:22:29 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.574 15:22:29 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.574 15:22:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.574 [2024-12-06 15:22:29.376573] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:05:23.574 [2024-12-06 15:22:29.376623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819638 ] 00:05:23.833 [2024-12-06 15:22:29.657254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.833 [2024-12-06 15:22:29.691266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.401 15:22:30 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.401 15:22:30 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:24.401 15:22:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:24.401 00:05:24.401 15:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:24.401 INFO: shutting down applications... 00:05:24.401 15:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:24.401 15:22:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:24.401 15:22:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:24.401 15:22:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2819638 ]] 00:05:24.401 15:22:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2819638 00:05:24.401 15:22:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:24.401 15:22:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.401 15:22:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2819638 00:05:24.401 15:22:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:24.968 15:22:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:24.968 15:22:30 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.968 15:22:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2819638 00:05:24.968 15:22:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:24.968 15:22:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:24.968 15:22:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:24.968 15:22:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:24.968 SPDK target shutdown done 00:05:24.968 15:22:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:24.968 Success 00:05:24.968 00:05:24.968 real 0m1.559s 00:05:24.968 user 0m1.336s 00:05:24.968 sys 0m0.396s 00:05:24.968 15:22:30 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.968 15:22:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:24.968 ************************************ 00:05:24.968 END TEST json_config_extra_key 00:05:24.968 ************************************ 00:05:24.968 15:22:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:24.968 15:22:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.968 15:22:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.968 15:22:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.968 ************************************ 00:05:24.968 START TEST alias_rpc 00:05:24.968 ************************************ 00:05:24.968 15:22:30 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:24.968 * Looking for test storage... 
00:05:24.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:24.968 15:22:30 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.968 15:22:30 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.968 15:22:30 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.968 15:22:30 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.968 15:22:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:24.968 15:22:30 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.968 15:22:30 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.968 --rc genhtml_branch_coverage=1 00:05:24.968 --rc genhtml_function_coverage=1 00:05:24.968 --rc genhtml_legend=1 00:05:24.968 --rc geninfo_all_blocks=1 00:05:24.968 --rc geninfo_unexecuted_blocks=1 00:05:24.968 00:05:24.968 ' 00:05:24.968 15:22:30 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.968 --rc genhtml_branch_coverage=1 00:05:24.968 --rc genhtml_function_coverage=1 00:05:24.968 --rc genhtml_legend=1 00:05:24.968 --rc geninfo_all_blocks=1 00:05:24.968 --rc geninfo_unexecuted_blocks=1 00:05:24.968 00:05:24.968 ' 00:05:24.968 15:22:30 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:05:24.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.968 --rc genhtml_branch_coverage=1 00:05:24.968 --rc genhtml_function_coverage=1 00:05:24.968 --rc genhtml_legend=1 00:05:24.968 --rc geninfo_all_blocks=1 00:05:24.968 --rc geninfo_unexecuted_blocks=1 00:05:24.968 00:05:24.968 ' 00:05:24.968 15:22:30 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.968 --rc genhtml_branch_coverage=1 00:05:24.968 --rc genhtml_function_coverage=1 00:05:24.968 --rc genhtml_legend=1 00:05:24.968 --rc geninfo_all_blocks=1 00:05:24.968 --rc geninfo_unexecuted_blocks=1 00:05:24.968 00:05:24.968 ' 00:05:24.968 15:22:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:24.968 15:22:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2819926 00:05:24.968 15:22:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2819926 00:05:24.968 15:22:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.968 15:22:30 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2819926 ']' 00:05:24.968 15:22:30 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.968 15:22:30 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.968 15:22:30 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.969 15:22:30 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.969 15:22:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.227 [2024-12-06 15:22:31.002812] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
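The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above waits for the target's RPC socket to appear before proceeding. A hedged sketch of that idea, polling for the socket path with `test -S` under a bounded retry budget; the function name, retry count, and sleep interval are illustrative, not the harness's exact `waitforlisten` code:

```shell
#!/usr/bin/env bash
# Illustrative socket-wait helper: succeed as soon as the path exists and
# is a socket, fail after $max polls. Fractional sleep assumes GNU sleep.
wait_for_sock() {
    local sock=$1 n=0 max=${2:-100}
    while (( n < max )); do
        [ -S "$sock" ] && return 0   # socket exists: target is listening
        n=$((n + 1))
        sleep 0.01
    done
    return 1                          # gave up: caller should fail the test
}

# Demonstrate the timeout path against a path that never appears.
wait_for_sock "/var/tmp/no_such_$$.sock" 10 || echo "timed out waiting"
```

The real helper additionally checks that the PID is still alive between polls, so a crashed target fails fast instead of burning the whole retry budget.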
00:05:25.227 [2024-12-06 15:22:31.002856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2819926 ] 00:05:25.227 [2024-12-06 15:22:31.075270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.227 [2024-12-06 15:22:31.117510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.486 15:22:31 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.486 15:22:31 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:25.486 15:22:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:25.744 15:22:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2819926 00:05:25.744 15:22:31 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2819926 ']' 00:05:25.744 15:22:31 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2819926 00:05:25.744 15:22:31 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:25.744 15:22:31 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.744 15:22:31 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2819926 00:05:25.744 15:22:31 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.744 15:22:31 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.744 15:22:31 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2819926' 00:05:25.744 killing process with pid 2819926 00:05:25.744 15:22:31 alias_rpc -- common/autotest_common.sh@973 -- # kill 2819926 00:05:25.744 15:22:31 alias_rpc -- common/autotest_common.sh@978 -- # wait 2819926 00:05:26.002 00:05:26.002 real 0m1.131s 00:05:26.002 user 0m1.141s 00:05:26.002 sys 0m0.409s 00:05:26.002 15:22:31 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.002 15:22:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.002 ************************************ 00:05:26.003 END TEST alias_rpc 00:05:26.003 ************************************ 00:05:26.003 15:22:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:26.003 15:22:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:26.003 15:22:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.003 15:22:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.003 15:22:31 -- common/autotest_common.sh@10 -- # set +x 00:05:26.003 ************************************ 00:05:26.003 START TEST spdkcli_tcp 00:05:26.003 ************************************ 00:05:26.003 15:22:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:26.261 * Looking for test storage... 
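The `lt 1.15 2` / `cmp_versions 1.15 '<' 2` trace that recurs in each test's prologue splits both versions into arrays on `IFS=.-:` and compares them field by field, padding the shorter one with zeros. A minimal re-sketch of that logic, not the exact `scripts/common.sh` implementation, and restricted to numeric components (a suffix like `-pre` would need extra handling):

```shell
#!/usr/bin/env bash
# ver_lt A B: return 0 if dotted version A sorts strictly before B.
ver_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # pad missing fields with 0
        (( a > b )) && return 1             # left is greater: not less-than
        (( a < b )) && return 0             # left is smaller: less-than holds
    done
    return 1                                 # equal: not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This is why the harness treats `lcov` 1.15 as older than 2 and switches the coverage flags accordingly.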
00:05:26.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.261 15:22:32 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:26.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.261 --rc genhtml_branch_coverage=1 00:05:26.261 --rc genhtml_function_coverage=1 00:05:26.261 --rc genhtml_legend=1 00:05:26.261 --rc geninfo_all_blocks=1 00:05:26.261 --rc geninfo_unexecuted_blocks=1 00:05:26.261 00:05:26.261 ' 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:26.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.261 --rc genhtml_branch_coverage=1 00:05:26.261 --rc genhtml_function_coverage=1 00:05:26.261 --rc genhtml_legend=1 00:05:26.261 --rc geninfo_all_blocks=1 00:05:26.261 --rc geninfo_unexecuted_blocks=1 00:05:26.261 00:05:26.261 ' 00:05:26.261 15:22:32 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:26.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.261 --rc genhtml_branch_coverage=1 00:05:26.261 --rc genhtml_function_coverage=1 00:05:26.261 --rc genhtml_legend=1 00:05:26.261 --rc geninfo_all_blocks=1 00:05:26.261 --rc geninfo_unexecuted_blocks=1 00:05:26.261 00:05:26.261 ' 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:26.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.261 --rc genhtml_branch_coverage=1 00:05:26.261 --rc genhtml_function_coverage=1 00:05:26.261 --rc genhtml_legend=1 00:05:26.261 --rc geninfo_all_blocks=1 00:05:26.261 --rc geninfo_unexecuted_blocks=1 00:05:26.261 00:05:26.261 ' 00:05:26.261 15:22:32 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:26.261 15:22:32 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:26.261 15:22:32 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:26.261 15:22:32 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:26.261 15:22:32 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:26.261 15:22:32 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:26.261 15:22:32 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.261 15:22:32 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2820219 00:05:26.261 15:22:32 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:26.261 15:22:32 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 2820219 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2820219 ']' 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.261 15:22:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.261 [2024-12-06 15:22:32.211811] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:05:26.261 [2024-12-06 15:22:32.211859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820219 ] 00:05:26.519 [2024-12-06 15:22:32.285468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.519 [2024-12-06 15:22:32.326508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.519 [2024-12-06 15:22:32.326508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.791 15:22:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.791 15:22:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:26.791 15:22:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2820228 00:05:26.791 15:22:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:26.791 15:22:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:05:26.791 [ 00:05:26.791 "bdev_malloc_delete", 00:05:26.791 "bdev_malloc_create", 00:05:26.791 "bdev_null_resize", 00:05:26.791 "bdev_null_delete", 00:05:26.791 "bdev_null_create", 00:05:26.791 "bdev_nvme_cuse_unregister", 00:05:26.791 "bdev_nvme_cuse_register", 00:05:26.791 "bdev_opal_new_user", 00:05:26.791 "bdev_opal_set_lock_state", 00:05:26.791 "bdev_opal_delete", 00:05:26.791 "bdev_opal_get_info", 00:05:26.791 "bdev_opal_create", 00:05:26.791 "bdev_nvme_opal_revert", 00:05:26.791 "bdev_nvme_opal_init", 00:05:26.791 "bdev_nvme_send_cmd", 00:05:26.791 "bdev_nvme_set_keys", 00:05:26.791 "bdev_nvme_get_path_iostat", 00:05:26.791 "bdev_nvme_get_mdns_discovery_info", 00:05:26.791 "bdev_nvme_stop_mdns_discovery", 00:05:26.791 "bdev_nvme_start_mdns_discovery", 00:05:26.791 "bdev_nvme_set_multipath_policy", 00:05:26.791 "bdev_nvme_set_preferred_path", 00:05:26.791 "bdev_nvme_get_io_paths", 00:05:26.791 "bdev_nvme_remove_error_injection", 00:05:26.791 "bdev_nvme_add_error_injection", 00:05:26.791 "bdev_nvme_get_discovery_info", 00:05:26.791 "bdev_nvme_stop_discovery", 00:05:26.791 "bdev_nvme_start_discovery", 00:05:26.791 "bdev_nvme_get_controller_health_info", 00:05:26.791 "bdev_nvme_disable_controller", 00:05:26.791 "bdev_nvme_enable_controller", 00:05:26.791 "bdev_nvme_reset_controller", 00:05:26.791 "bdev_nvme_get_transport_statistics", 00:05:26.791 "bdev_nvme_apply_firmware", 00:05:26.791 "bdev_nvme_detach_controller", 00:05:26.791 "bdev_nvme_get_controllers", 00:05:26.791 "bdev_nvme_attach_controller", 00:05:26.791 "bdev_nvme_set_hotplug", 00:05:26.791 "bdev_nvme_set_options", 00:05:26.791 "bdev_passthru_delete", 00:05:26.791 "bdev_passthru_create", 00:05:26.791 "bdev_lvol_set_parent_bdev", 00:05:26.791 "bdev_lvol_set_parent", 00:05:26.791 "bdev_lvol_check_shallow_copy", 00:05:26.791 "bdev_lvol_start_shallow_copy", 00:05:26.791 "bdev_lvol_grow_lvstore", 00:05:26.791 "bdev_lvol_get_lvols", 00:05:26.791 "bdev_lvol_get_lvstores", 
00:05:26.791 "bdev_lvol_delete", 00:05:26.791 "bdev_lvol_set_read_only", 00:05:26.791 "bdev_lvol_resize", 00:05:26.791 "bdev_lvol_decouple_parent", 00:05:26.791 "bdev_lvol_inflate", 00:05:26.791 "bdev_lvol_rename", 00:05:26.791 "bdev_lvol_clone_bdev", 00:05:26.791 "bdev_lvol_clone", 00:05:26.791 "bdev_lvol_snapshot", 00:05:26.791 "bdev_lvol_create", 00:05:26.791 "bdev_lvol_delete_lvstore", 00:05:26.791 "bdev_lvol_rename_lvstore", 00:05:26.791 "bdev_lvol_create_lvstore", 00:05:26.791 "bdev_raid_set_options", 00:05:26.791 "bdev_raid_remove_base_bdev", 00:05:26.791 "bdev_raid_add_base_bdev", 00:05:26.791 "bdev_raid_delete", 00:05:26.791 "bdev_raid_create", 00:05:26.791 "bdev_raid_get_bdevs", 00:05:26.791 "bdev_error_inject_error", 00:05:26.791 "bdev_error_delete", 00:05:26.791 "bdev_error_create", 00:05:26.791 "bdev_split_delete", 00:05:26.791 "bdev_split_create", 00:05:26.791 "bdev_delay_delete", 00:05:26.791 "bdev_delay_create", 00:05:26.791 "bdev_delay_update_latency", 00:05:26.791 "bdev_zone_block_delete", 00:05:26.791 "bdev_zone_block_create", 00:05:26.791 "blobfs_create", 00:05:26.791 "blobfs_detect", 00:05:26.791 "blobfs_set_cache_size", 00:05:26.791 "bdev_aio_delete", 00:05:26.791 "bdev_aio_rescan", 00:05:26.791 "bdev_aio_create", 00:05:26.791 "bdev_ftl_set_property", 00:05:26.791 "bdev_ftl_get_properties", 00:05:26.791 "bdev_ftl_get_stats", 00:05:26.791 "bdev_ftl_unmap", 00:05:26.791 "bdev_ftl_unload", 00:05:26.791 "bdev_ftl_delete", 00:05:26.791 "bdev_ftl_load", 00:05:26.791 "bdev_ftl_create", 00:05:26.791 "bdev_virtio_attach_controller", 00:05:26.791 "bdev_virtio_scsi_get_devices", 00:05:26.791 "bdev_virtio_detach_controller", 00:05:26.791 "bdev_virtio_blk_set_hotplug", 00:05:26.791 "bdev_iscsi_delete", 00:05:26.791 "bdev_iscsi_create", 00:05:26.791 "bdev_iscsi_set_options", 00:05:26.791 "accel_error_inject_error", 00:05:26.791 "ioat_scan_accel_module", 00:05:26.791 "dsa_scan_accel_module", 00:05:26.791 "iaa_scan_accel_module", 00:05:26.791 
"vfu_virtio_create_fs_endpoint", 00:05:26.791 "vfu_virtio_create_scsi_endpoint", 00:05:26.791 "vfu_virtio_scsi_remove_target", 00:05:26.791 "vfu_virtio_scsi_add_target", 00:05:26.791 "vfu_virtio_create_blk_endpoint", 00:05:26.791 "vfu_virtio_delete_endpoint", 00:05:26.791 "keyring_file_remove_key", 00:05:26.791 "keyring_file_add_key", 00:05:26.791 "keyring_linux_set_options", 00:05:26.791 "fsdev_aio_delete", 00:05:26.791 "fsdev_aio_create", 00:05:26.791 "iscsi_get_histogram", 00:05:26.791 "iscsi_enable_histogram", 00:05:26.791 "iscsi_set_options", 00:05:26.791 "iscsi_get_auth_groups", 00:05:26.791 "iscsi_auth_group_remove_secret", 00:05:26.791 "iscsi_auth_group_add_secret", 00:05:26.791 "iscsi_delete_auth_group", 00:05:26.791 "iscsi_create_auth_group", 00:05:26.791 "iscsi_set_discovery_auth", 00:05:26.791 "iscsi_get_options", 00:05:26.791 "iscsi_target_node_request_logout", 00:05:26.791 "iscsi_target_node_set_redirect", 00:05:26.791 "iscsi_target_node_set_auth", 00:05:26.791 "iscsi_target_node_add_lun", 00:05:26.791 "iscsi_get_stats", 00:05:26.791 "iscsi_get_connections", 00:05:26.791 "iscsi_portal_group_set_auth", 00:05:26.791 "iscsi_start_portal_group", 00:05:26.791 "iscsi_delete_portal_group", 00:05:26.791 "iscsi_create_portal_group", 00:05:26.791 "iscsi_get_portal_groups", 00:05:26.791 "iscsi_delete_target_node", 00:05:26.791 "iscsi_target_node_remove_pg_ig_maps", 00:05:26.791 "iscsi_target_node_add_pg_ig_maps", 00:05:26.791 "iscsi_create_target_node", 00:05:26.791 "iscsi_get_target_nodes", 00:05:26.791 "iscsi_delete_initiator_group", 00:05:26.791 "iscsi_initiator_group_remove_initiators", 00:05:26.791 "iscsi_initiator_group_add_initiators", 00:05:26.791 "iscsi_create_initiator_group", 00:05:26.791 "iscsi_get_initiator_groups", 00:05:26.791 "nvmf_set_crdt", 00:05:26.791 "nvmf_set_config", 00:05:26.791 "nvmf_set_max_subsystems", 00:05:26.791 "nvmf_stop_mdns_prr", 00:05:26.791 "nvmf_publish_mdns_prr", 00:05:26.791 "nvmf_subsystem_get_listeners", 00:05:26.791 
"nvmf_subsystem_get_qpairs", 00:05:26.791 "nvmf_subsystem_get_controllers", 00:05:26.791 "nvmf_get_stats", 00:05:26.791 "nvmf_get_transports", 00:05:26.791 "nvmf_create_transport", 00:05:26.791 "nvmf_get_targets", 00:05:26.791 "nvmf_delete_target", 00:05:26.791 "nvmf_create_target", 00:05:26.791 "nvmf_subsystem_allow_any_host", 00:05:26.791 "nvmf_subsystem_set_keys", 00:05:26.791 "nvmf_subsystem_remove_host", 00:05:26.791 "nvmf_subsystem_add_host", 00:05:26.791 "nvmf_ns_remove_host", 00:05:26.791 "nvmf_ns_add_host", 00:05:26.791 "nvmf_subsystem_remove_ns", 00:05:26.791 "nvmf_subsystem_set_ns_ana_group", 00:05:26.791 "nvmf_subsystem_add_ns", 00:05:26.791 "nvmf_subsystem_listener_set_ana_state", 00:05:26.791 "nvmf_discovery_get_referrals", 00:05:26.791 "nvmf_discovery_remove_referral", 00:05:26.791 "nvmf_discovery_add_referral", 00:05:26.791 "nvmf_subsystem_remove_listener", 00:05:26.791 "nvmf_subsystem_add_listener", 00:05:26.791 "nvmf_delete_subsystem", 00:05:26.791 "nvmf_create_subsystem", 00:05:26.791 "nvmf_get_subsystems", 00:05:26.791 "env_dpdk_get_mem_stats", 00:05:26.791 "nbd_get_disks", 00:05:26.791 "nbd_stop_disk", 00:05:26.791 "nbd_start_disk", 00:05:26.791 "ublk_recover_disk", 00:05:26.791 "ublk_get_disks", 00:05:26.791 "ublk_stop_disk", 00:05:26.791 "ublk_start_disk", 00:05:26.791 "ublk_destroy_target", 00:05:26.791 "ublk_create_target", 00:05:26.791 "virtio_blk_create_transport", 00:05:26.791 "virtio_blk_get_transports", 00:05:26.791 "vhost_controller_set_coalescing", 00:05:26.791 "vhost_get_controllers", 00:05:26.791 "vhost_delete_controller", 00:05:26.791 "vhost_create_blk_controller", 00:05:26.791 "vhost_scsi_controller_remove_target", 00:05:26.791 "vhost_scsi_controller_add_target", 00:05:26.791 "vhost_start_scsi_controller", 00:05:26.791 "vhost_create_scsi_controller", 00:05:26.791 "thread_set_cpumask", 00:05:26.791 "scheduler_set_options", 00:05:26.791 "framework_get_governor", 00:05:26.791 "framework_get_scheduler", 00:05:26.791 
"framework_set_scheduler", 00:05:26.791 "framework_get_reactors", 00:05:26.791 "thread_get_io_channels", 00:05:26.791 "thread_get_pollers", 00:05:26.791 "thread_get_stats", 00:05:26.791 "framework_monitor_context_switch", 00:05:26.791 "spdk_kill_instance", 00:05:26.791 "log_enable_timestamps", 00:05:26.791 "log_get_flags", 00:05:26.791 "log_clear_flag", 00:05:26.791 "log_set_flag", 00:05:26.791 "log_get_level", 00:05:26.791 "log_set_level", 00:05:26.791 "log_get_print_level", 00:05:26.791 "log_set_print_level", 00:05:26.791 "framework_enable_cpumask_locks", 00:05:26.791 "framework_disable_cpumask_locks", 00:05:26.791 "framework_wait_init", 00:05:26.791 "framework_start_init", 00:05:26.791 "scsi_get_devices", 00:05:26.791 "bdev_get_histogram", 00:05:26.791 "bdev_enable_histogram", 00:05:26.791 "bdev_set_qos_limit", 00:05:26.791 "bdev_set_qd_sampling_period", 00:05:26.791 "bdev_get_bdevs", 00:05:26.791 "bdev_reset_iostat", 00:05:26.791 "bdev_get_iostat", 00:05:26.791 "bdev_examine", 00:05:26.791 "bdev_wait_for_examine", 00:05:26.791 "bdev_set_options", 00:05:26.791 "accel_get_stats", 00:05:26.791 "accel_set_options", 00:05:26.791 "accel_set_driver", 00:05:26.791 "accel_crypto_key_destroy", 00:05:26.791 "accel_crypto_keys_get", 00:05:26.791 "accel_crypto_key_create", 00:05:26.791 "accel_assign_opc", 00:05:26.791 "accel_get_module_info", 00:05:26.791 "accel_get_opc_assignments", 00:05:26.791 "vmd_rescan", 00:05:26.791 "vmd_remove_device", 00:05:26.791 "vmd_enable", 00:05:26.791 "sock_get_default_impl", 00:05:26.791 "sock_set_default_impl", 00:05:26.791 "sock_impl_set_options", 00:05:26.791 "sock_impl_get_options", 00:05:26.791 "iobuf_get_stats", 00:05:26.791 "iobuf_set_options", 00:05:26.791 "keyring_get_keys", 00:05:26.791 "vfu_tgt_set_base_path", 00:05:26.791 "framework_get_pci_devices", 00:05:26.791 "framework_get_config", 00:05:26.791 "framework_get_subsystems", 00:05:26.791 "fsdev_set_opts", 00:05:26.791 "fsdev_get_opts", 00:05:26.791 "trace_get_info", 
00:05:26.791 "trace_get_tpoint_group_mask", 00:05:26.791 "trace_disable_tpoint_group", 00:05:26.791 "trace_enable_tpoint_group", 00:05:26.791 "trace_clear_tpoint_mask", 00:05:26.791 "trace_set_tpoint_mask", 00:05:26.791 "notify_get_notifications", 00:05:26.791 "notify_get_types", 00:05:26.791 "spdk_get_version", 00:05:26.791 "rpc_get_methods" 00:05:26.791 ] 00:05:26.791 15:22:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:26.791 15:22:32 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.791 15:22:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.791 15:22:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:26.791 15:22:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2820219 00:05:26.791 15:22:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2820219 ']' 00:05:26.791 15:22:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2820219 00:05:26.791 15:22:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:27.050 15:22:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.050 15:22:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2820219 00:05:27.050 15:22:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.050 15:22:32 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.050 15:22:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2820219' 00:05:27.050 killing process with pid 2820219 00:05:27.050 15:22:32 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2820219 00:05:27.050 15:22:32 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2820219 00:05:27.309 00:05:27.309 real 0m1.166s 00:05:27.309 user 0m1.943s 00:05:27.309 sys 0m0.450s 00:05:27.309 15:22:33 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.309 15:22:33 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:05:27.309 ************************************ 00:05:27.309 END TEST spdkcli_tcp 00:05:27.309 ************************************ 00:05:27.309 15:22:33 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.309 15:22:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.309 15:22:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.309 15:22:33 -- common/autotest_common.sh@10 -- # set +x 00:05:27.309 ************************************ 00:05:27.309 START TEST dpdk_mem_utility 00:05:27.309 ************************************ 00:05:27.309 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.309 * Looking for test storage... 00:05:27.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:27.309 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.309 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.309 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.567 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.567 15:22:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:27.567 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.567 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:05:27.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.567 --rc genhtml_branch_coverage=1 00:05:27.567 --rc genhtml_function_coverage=1 00:05:27.567 --rc genhtml_legend=1 00:05:27.567 --rc geninfo_all_blocks=1 00:05:27.567 --rc geninfo_unexecuted_blocks=1 00:05:27.567 00:05:27.567 ' 00:05:27.567 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.567 --rc genhtml_branch_coverage=1 00:05:27.567 --rc genhtml_function_coverage=1 00:05:27.567 --rc genhtml_legend=1 00:05:27.567 --rc geninfo_all_blocks=1 00:05:27.567 --rc geninfo_unexecuted_blocks=1 00:05:27.567 00:05:27.567 ' 00:05:27.567 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.567 --rc genhtml_branch_coverage=1 00:05:27.567 --rc genhtml_function_coverage=1 00:05:27.567 --rc genhtml_legend=1 00:05:27.567 --rc geninfo_all_blocks=1 00:05:27.567 --rc geninfo_unexecuted_blocks=1 00:05:27.567 00:05:27.567 ' 00:05:27.567 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.567 --rc genhtml_branch_coverage=1 00:05:27.567 --rc genhtml_function_coverage=1 00:05:27.567 --rc genhtml_legend=1 00:05:27.567 --rc geninfo_all_blocks=1 00:05:27.567 --rc geninfo_unexecuted_blocks=1 00:05:27.567 00:05:27.567 ' 00:05:27.567 15:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.567 15:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2820518 00:05:27.567 15:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2820518 00:05:27.567 15:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.567 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2820518 ']' 00:05:27.567 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.567 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.567 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.567 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.567 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.567 [2024-12-06 15:22:33.433039] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:05:27.567 [2024-12-06 15:22:33.433091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820518 ] 00:05:27.567 [2024-12-06 15:22:33.508282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.567 [2024-12-06 15:22:33.550020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.824 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.824 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:27.824 15:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:27.824 15:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:27.824 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.824 
15:22:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.824 { 00:05:27.824 "filename": "/tmp/spdk_mem_dump.txt" 00:05:27.824 } 00:05:27.824 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.824 15:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.824 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:27.824 1 heaps totaling size 818.000000 MiB 00:05:27.824 size: 818.000000 MiB heap id: 0 00:05:27.824 end heaps---------- 00:05:27.824 9 mempools totaling size 603.782043 MiB 00:05:27.824 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:27.824 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:27.824 size: 100.555481 MiB name: bdev_io_2820518 00:05:27.824 size: 50.003479 MiB name: msgpool_2820518 00:05:27.824 size: 36.509338 MiB name: fsdev_io_2820518 00:05:27.824 size: 21.763794 MiB name: PDU_Pool 00:05:27.824 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:27.824 size: 4.133484 MiB name: evtpool_2820518 00:05:27.824 size: 0.026123 MiB name: Session_Pool 00:05:27.824 end mempools------- 00:05:27.824 6 memzones totaling size 4.142822 MiB 00:05:27.824 size: 1.000366 MiB name: RG_ring_0_2820518 00:05:27.824 size: 1.000366 MiB name: RG_ring_1_2820518 00:05:27.824 size: 1.000366 MiB name: RG_ring_4_2820518 00:05:27.824 size: 1.000366 MiB name: RG_ring_5_2820518 00:05:27.824 size: 0.125366 MiB name: RG_ring_2_2820518 00:05:27.824 size: 0.015991 MiB name: RG_ring_3_2820518 00:05:27.824 end memzones------- 00:05:28.086 15:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:28.086 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:28.086 list of free elements. 
size: 10.852478 MiB 00:05:28.086 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:28.086 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:28.086 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:28.086 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:28.086 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:28.086 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:28.086 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:28.086 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:28.086 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:28.086 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:28.086 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:28.086 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:28.086 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:28.086 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:28.086 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:28.086 list of standard malloc elements. 
size: 199.218628 MiB 00:05:28.086 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:28.086 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:28.086 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:28.086 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:28.086 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:28.086 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:28.086 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:28.086 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:28.086 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:28.086 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:28.086 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:28.086 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:28.086 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:28.086 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:28.086 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:28.086 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:28.086 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:28.086 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:28.086 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:28.086 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:28.086 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:28.086 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:28.086 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:28.086 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:28.086 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:28.086 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:28.086 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:28.086 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:28.086 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:28.086 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:28.086 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:28.086 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:28.086 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:28.086 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:28.086 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:28.086 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:28.086 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:28.086 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:28.086 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:28.086 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:28.086 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:28.086 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:28.086 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:28.086 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:28.086 list of memzone associated elements. 
size: 607.928894 MiB 00:05:28.086 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:28.086 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:28.086 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:28.086 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:28.086 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:28.086 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2820518_0 00:05:28.086 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:28.086 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2820518_0 00:05:28.086 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:28.086 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2820518_0 00:05:28.086 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:28.086 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:28.087 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:28.087 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:28.087 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:28.087 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2820518_0 00:05:28.087 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:28.087 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2820518 00:05:28.087 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:28.087 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2820518 00:05:28.087 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:28.087 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:28.087 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:28.087 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:28.087 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:28.087 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:28.087 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:28.087 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:28.087 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:28.087 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2820518 00:05:28.087 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:28.087 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2820518 00:05:28.087 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:28.087 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2820518 00:05:28.087 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:28.087 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2820518 00:05:28.087 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:28.087 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2820518 00:05:28.087 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:28.087 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2820518 00:05:28.087 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:28.087 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:28.087 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:28.087 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:28.087 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:28.087 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:28.087 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:28.087 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2820518 00:05:28.087 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:28.087 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2820518 00:05:28.087 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:05:28.087 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:28.087 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:28.087 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:28.087 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:28.087 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2820518 00:05:28.087 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:28.087 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:28.087 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:28.087 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2820518 00:05:28.087 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:28.087 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2820518 00:05:28.087 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:28.087 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2820518 00:05:28.087 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:28.087 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:28.087 15:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:28.087 15:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2820518 00:05:28.087 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2820518 ']' 00:05:28.087 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2820518 00:05:28.087 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:28.087 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.087 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2820518 00:05:28.087 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.087 15:22:33 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.087 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2820518' 00:05:28.087 killing process with pid 2820518 00:05:28.087 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2820518 00:05:28.087 15:22:33 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2820518 00:05:28.425 00:05:28.425 real 0m1.015s 00:05:28.425 user 0m0.952s 00:05:28.425 sys 0m0.399s 00:05:28.425 15:22:34 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.425 15:22:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.425 ************************************ 00:05:28.425 END TEST dpdk_mem_utility 00:05:28.425 ************************************ 00:05:28.425 15:22:34 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.425 15:22:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.425 15:22:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.425 15:22:34 -- common/autotest_common.sh@10 -- # set +x 00:05:28.425 ************************************ 00:05:28.425 START TEST event 00:05:28.425 ************************************ 00:05:28.425 15:22:34 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.425 * Looking for test storage... 
00:05:28.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:28.425 15:22:34 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.425 15:22:34 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.425 15:22:34 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.756 15:22:34 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.756 15:22:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.756 15:22:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.756 15:22:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.756 15:22:34 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.756 15:22:34 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.756 15:22:34 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.756 15:22:34 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.756 15:22:34 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.756 15:22:34 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.756 15:22:34 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.756 15:22:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.756 15:22:34 event -- scripts/common.sh@344 -- # case "$op" in 00:05:28.756 15:22:34 event -- scripts/common.sh@345 -- # : 1 00:05:28.756 15:22:34 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.756 15:22:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.756 15:22:34 event -- scripts/common.sh@365 -- # decimal 1 00:05:28.757 15:22:34 event -- scripts/common.sh@353 -- # local d=1 00:05:28.757 15:22:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.757 15:22:34 event -- scripts/common.sh@355 -- # echo 1 00:05:28.757 15:22:34 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.757 15:22:34 event -- scripts/common.sh@366 -- # decimal 2 00:05:28.757 15:22:34 event -- scripts/common.sh@353 -- # local d=2 00:05:28.757 15:22:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.757 15:22:34 event -- scripts/common.sh@355 -- # echo 2 00:05:28.757 15:22:34 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.757 15:22:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.757 15:22:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.757 15:22:34 event -- scripts/common.sh@368 -- # return 0 00:05:28.757 15:22:34 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.757 15:22:34 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.757 --rc genhtml_branch_coverage=1 00:05:28.757 --rc genhtml_function_coverage=1 00:05:28.757 --rc genhtml_legend=1 00:05:28.757 --rc geninfo_all_blocks=1 00:05:28.757 --rc geninfo_unexecuted_blocks=1 00:05:28.757 00:05:28.757 ' 00:05:28.757 15:22:34 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.757 --rc genhtml_branch_coverage=1 00:05:28.757 --rc genhtml_function_coverage=1 00:05:28.757 --rc genhtml_legend=1 00:05:28.757 --rc geninfo_all_blocks=1 00:05:28.757 --rc geninfo_unexecuted_blocks=1 00:05:28.757 00:05:28.757 ' 00:05:28.757 15:22:34 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.757 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:28.757 --rc genhtml_branch_coverage=1 00:05:28.757 --rc genhtml_function_coverage=1 00:05:28.757 --rc genhtml_legend=1 00:05:28.757 --rc geninfo_all_blocks=1 00:05:28.757 --rc geninfo_unexecuted_blocks=1 00:05:28.757 00:05:28.757 ' 00:05:28.757 15:22:34 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.757 --rc genhtml_branch_coverage=1 00:05:28.757 --rc genhtml_function_coverage=1 00:05:28.757 --rc genhtml_legend=1 00:05:28.757 --rc geninfo_all_blocks=1 00:05:28.757 --rc geninfo_unexecuted_blocks=1 00:05:28.757 00:05:28.757 ' 00:05:28.757 15:22:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:28.757 15:22:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:28.757 15:22:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.757 15:22:34 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:28.757 15:22:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.757 15:22:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.757 ************************************ 00:05:28.757 START TEST event_perf 00:05:28.757 ************************************ 00:05:28.757 15:22:34 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.757 Running I/O for 1 seconds...[2024-12-06 15:22:34.518412] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:05:28.757 [2024-12-06 15:22:34.518480] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820648 ] 00:05:28.757 [2024-12-06 15:22:34.597474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.757 [2024-12-06 15:22:34.641268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.757 [2024-12-06 15:22:34.641394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.757 [2024-12-06 15:22:34.641459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.757 [2024-12-06 15:22:34.641459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.717 Running I/O for 1 seconds... 00:05:29.717 lcore 0: 205367 00:05:29.717 lcore 1: 205367 00:05:29.717 lcore 2: 205367 00:05:29.717 lcore 3: 205368 00:05:29.717 done. 
00:05:29.717 00:05:29.717 real 0m1.185s 00:05:29.717 user 0m4.099s 00:05:29.717 sys 0m0.084s 00:05:29.717 15:22:35 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.717 15:22:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.717 ************************************ 00:05:29.717 END TEST event_perf 00:05:29.717 ************************************ 00:05:29.976 15:22:35 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:29.976 15:22:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:29.976 15:22:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.976 15:22:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.976 ************************************ 00:05:29.976 START TEST event_reactor 00:05:29.976 ************************************ 00:05:29.976 15:22:35 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:29.976 [2024-12-06 15:22:35.776920] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:05:29.976 [2024-12-06 15:22:35.776976] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820862 ] 00:05:29.976 [2024-12-06 15:22:35.855262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.976 [2024-12-06 15:22:35.895313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.354 test_start 00:05:31.354 oneshot 00:05:31.354 tick 100 00:05:31.354 tick 100 00:05:31.354 tick 250 00:05:31.354 tick 100 00:05:31.354 tick 100 00:05:31.354 tick 250 00:05:31.354 tick 100 00:05:31.354 tick 500 00:05:31.354 tick 100 00:05:31.354 tick 100 00:05:31.354 tick 250 00:05:31.354 tick 100 00:05:31.354 tick 100 00:05:31.354 test_end 00:05:31.354 00:05:31.354 real 0m1.179s 00:05:31.354 user 0m1.105s 00:05:31.354 sys 0m0.070s 00:05:31.354 15:22:36 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.354 15:22:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:31.354 ************************************ 00:05:31.354 END TEST event_reactor 00:05:31.354 ************************************ 00:05:31.354 15:22:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.354 15:22:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:31.354 15:22:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.354 15:22:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.354 ************************************ 00:05:31.354 START TEST event_reactor_perf 00:05:31.354 ************************************ 00:05:31.354 15:22:37 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:31.354 [2024-12-06 15:22:37.027976] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:05:31.354 [2024-12-06 15:22:37.028045] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2821111 ] 00:05:31.354 [2024-12-06 15:22:37.107524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.354 [2024-12-06 15:22:37.146015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.290 test_start 00:05:32.290 test_end 00:05:32.290 Performance: 510042 events per second 00:05:32.290 00:05:32.290 real 0m1.179s 00:05:32.290 user 0m1.096s 00:05:32.290 sys 0m0.079s 00:05:32.290 15:22:38 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.290 15:22:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.290 ************************************ 00:05:32.290 END TEST event_reactor_perf 00:05:32.290 ************************************ 00:05:32.290 15:22:38 event -- event/event.sh@49 -- # uname -s 00:05:32.290 15:22:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:32.290 15:22:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.290 15:22:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.290 15:22:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.290 15:22:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.290 ************************************ 00:05:32.290 START TEST event_scheduler 00:05:32.290 ************************************ 00:05:32.290 15:22:38 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.549 * Looking for test storage... 00:05:32.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:32.549 15:22:38 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.549 15:22:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:32.549 15:22:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.549 15:22:38 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.549 15:22:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:32.549 15:22:38 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.549 15:22:38 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:32.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.549 --rc genhtml_branch_coverage=1 00:05:32.549 --rc genhtml_function_coverage=1 00:05:32.549 --rc genhtml_legend=1 00:05:32.549 --rc geninfo_all_blocks=1 00:05:32.549 --rc geninfo_unexecuted_blocks=1 00:05:32.549 00:05:32.549 ' 00:05:32.549 15:22:38 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:32.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.549 --rc genhtml_branch_coverage=1 00:05:32.549 --rc genhtml_function_coverage=1 00:05:32.549 --rc 
genhtml_legend=1 00:05:32.549 --rc geninfo_all_blocks=1 00:05:32.549 --rc geninfo_unexecuted_blocks=1 00:05:32.549 00:05:32.549 ' 00:05:32.549 15:22:38 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:32.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.549 --rc genhtml_branch_coverage=1 00:05:32.549 --rc genhtml_function_coverage=1 00:05:32.549 --rc genhtml_legend=1 00:05:32.549 --rc geninfo_all_blocks=1 00:05:32.549 --rc geninfo_unexecuted_blocks=1 00:05:32.549 00:05:32.549 ' 00:05:32.549 15:22:38 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:32.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.549 --rc genhtml_branch_coverage=1 00:05:32.549 --rc genhtml_function_coverage=1 00:05:32.549 --rc genhtml_legend=1 00:05:32.549 --rc geninfo_all_blocks=1 00:05:32.549 --rc geninfo_unexecuted_blocks=1 00:05:32.549 00:05:32.549 ' 00:05:32.549 15:22:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:32.549 15:22:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2821400 00:05:32.549 15:22:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.549 15:22:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:32.549 15:22:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2821400 00:05:32.549 15:22:38 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2821400 ']' 00:05:32.550 15:22:38 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.550 15:22:38 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.550 15:22:38 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.550 15:22:38 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.550 15:22:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.550 [2024-12-06 15:22:38.479071] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:05:32.550 [2024-12-06 15:22:38.479114] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2821400 ] 00:05:32.809 [2024-12-06 15:22:38.554742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.809 [2024-12-06 15:22:38.597539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.809 [2024-12-06 15:22:38.597650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.809 [2024-12-06 15:22:38.597754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.809 [2024-12-06 15:22:38.597756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.809 15:22:38 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.809 15:22:38 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:32.809 15:22:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:32.809 15:22:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.809 15:22:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.809 [2024-12-06 15:22:38.642327] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:32.809 [2024-12-06 15:22:38.642344] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:32.809 [2024-12-06 15:22:38.642353] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:32.809 [2024-12-06 15:22:38.642359] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:32.809 [2024-12-06 15:22:38.642364] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:32.809 15:22:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.809 15:22:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:32.809 15:22:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.809 15:22:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.809 [2024-12-06 15:22:38.717566] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:32.809 15:22:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.809 15:22:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:32.809 15:22:38 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.809 15:22:38 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.809 15:22:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.809 ************************************ 00:05:32.809 START TEST scheduler_create_thread 00:05:32.809 ************************************ 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.809 2 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.809 3 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.809 4 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.809 5 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.809 15:22:38 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.809 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.068 6 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.068 7 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.068 8 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.068 15:22:38 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.068 9 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.068 10 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.068 15:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.004 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.004 15:22:39 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:34.004 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.004 15:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.378 15:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.378 15:22:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:35.378 15:22:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:35.378 15:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.378 15:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.314 15:22:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.314 00:05:36.314 real 0m3.380s 00:05:36.314 user 0m0.024s 00:05:36.314 sys 0m0.006s 00:05:36.314 15:22:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.314 15:22:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.314 ************************************ 00:05:36.314 END TEST scheduler_create_thread 00:05:36.314 ************************************ 00:05:36.314 15:22:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:36.314 15:22:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2821400 00:05:36.314 15:22:42 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2821400 ']' 00:05:36.314 15:22:42 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2821400 00:05:36.314 15:22:42 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:36.314 15:22:42 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.314 15:22:42 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2821400 00:05:36.314 15:22:42 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:36.314 15:22:42 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:36.314 15:22:42 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2821400' 00:05:36.314 killing process with pid 2821400 00:05:36.314 15:22:42 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2821400 00:05:36.314 15:22:42 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2821400 00:05:36.572 [2024-12-06 15:22:42.513822] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:36.831 00:05:36.831 real 0m4.455s 00:05:36.831 user 0m7.794s 00:05:36.831 sys 0m0.385s 00:05:36.831 15:22:42 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.831 15:22:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.831 ************************************ 00:05:36.831 END TEST event_scheduler 00:05:36.831 ************************************ 00:05:36.831 15:22:42 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:36.831 15:22:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:36.831 15:22:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.831 15:22:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.831 15:22:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.831 ************************************ 00:05:36.831 START TEST app_repeat 00:05:36.831 ************************************ 00:05:36.831 15:22:42 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:36.831 15:22:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.831 15:22:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.831 15:22:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:36.831 15:22:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.831 15:22:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:36.831 15:22:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:36.831 15:22:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:36.831 15:22:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2822238 00:05:36.831 15:22:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.831 15:22:42 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:36.831 15:22:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2822238' 00:05:36.831 Process app_repeat pid: 2822238 00:05:36.831 15:22:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.831 15:22:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:36.831 spdk_app_start Round 0 00:05:36.831 15:22:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2822238 /var/tmp/spdk-nbd.sock 00:05:36.831 15:22:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2822238 ']' 00:05:36.831 15:22:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.831 15:22:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.831 15:22:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.831 15:22:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.831 15:22:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.089 [2024-12-06 15:22:42.833016] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:05:37.089 [2024-12-06 15:22:42.833071] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2822238 ] 00:05:37.089 [2024-12-06 15:22:42.910387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.089 [2024-12-06 15:22:42.951625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.089 [2024-12-06 15:22:42.951626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.089 15:22:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.089 15:22:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:37.089 15:22:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.348 Malloc0 00:05:37.348 15:22:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.606 Malloc1 00:05:37.606 15:22:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.606 15:22:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.606 15:22:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.606 15:22:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.606 15:22:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.606 15:22:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.606 15:22:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.606 
15:22:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.606 15:22:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.606 15:22:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.606 15:22:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.606 15:22:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.606 15:22:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.606 15:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.606 15:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.606 15:22:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:37.919 /dev/nbd0 00:05:37.919 15:22:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:37.919 15:22:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:37.919 15:22:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:37.919 15:22:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:37.919 15:22:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:37.919 15:22:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:37.919 15:22:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:37.919 15:22:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:37.919 15:22:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:37.919 15:22:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:37.919 15:22:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:37.919 1+0 records in 00:05:37.919 1+0 records out 00:05:37.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195564 s, 20.9 MB/s 00:05:37.919 15:22:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.919 15:22:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:37.919 15:22:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.919 15:22:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:37.919 15:22:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:37.919 15:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.919 15:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.920 15:22:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.920 /dev/nbd1 00:05:38.182 15:22:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.182 15:22:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.182 15:22:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:38.182 15:22:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.182 15:22:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.182 15:22:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.182 15:22:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:38.182 15:22:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.182 15:22:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.182 15:22:43 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.182 15:22:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.182 1+0 records in 00:05:38.182 1+0 records out 00:05:38.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226216 s, 18.1 MB/s 00:05:38.182 15:22:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.182 15:22:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.182 15:22:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.183 15:22:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.183 15:22:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.183 15:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.183 15:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.183 15:22:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.183 15:22:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.183 15:22:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.183 15:22:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.183 { 00:05:38.183 "nbd_device": "/dev/nbd0", 00:05:38.183 "bdev_name": "Malloc0" 00:05:38.183 }, 00:05:38.183 { 00:05:38.183 "nbd_device": "/dev/nbd1", 00:05:38.183 "bdev_name": "Malloc1" 00:05:38.183 } 00:05:38.183 ]' 00:05:38.183 15:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.183 { 00:05:38.183 "nbd_device": "/dev/nbd0", 00:05:38.183 "bdev_name": "Malloc0" 00:05:38.183 
}, 00:05:38.183 { 00:05:38.183 "nbd_device": "/dev/nbd1", 00:05:38.183 "bdev_name": "Malloc1" 00:05:38.183 } 00:05:38.183 ]' 00:05:38.183 15:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.441 /dev/nbd1' 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.441 /dev/nbd1' 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.441 256+0 records in 00:05:38.441 256+0 records out 00:05:38.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106087 s, 98.8 MB/s 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.441 256+0 records in 00:05:38.441 256+0 records out 00:05:38.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013825 s, 75.8 MB/s 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.441 256+0 records in 00:05:38.441 256+0 records out 00:05:38.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145886 s, 71.9 MB/s 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.441 15:22:44 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.441 15:22:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.699 15:22:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.699 15:22:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.699 15:22:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.699 15:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.699 15:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.699 15:22:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.699 15:22:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.699 15:22:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.699 15:22:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.699 15:22:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.699 15:22:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.957 15:22:44 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.957 15:22:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.957 15:22:44 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.214 15:22:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.472 [2024-12-06 15:22:45.306787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.472 [2024-12-06 15:22:45.343527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.472 [2024-12-06 15:22:45.343528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.472 [2024-12-06 15:22:45.384300] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.472 [2024-12-06 15:22:45.384338] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.756 15:22:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:42.756 15:22:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:42.756 spdk_app_start Round 1 00:05:42.756 15:22:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2822238 /var/tmp/spdk-nbd.sock 00:05:42.756 15:22:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2822238 ']' 00:05:42.756 15:22:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.756 15:22:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.756 15:22:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:42.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
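The `nbd_get_count` steps traced above (echo the disks JSON, extract `.nbd_device` with `jq`, count matches with `grep -c`, falling back to `true` when nothing matches) can be sketched roughly as the helper below. This is a reconstruction from the logged xtrace, not the exact `nbd_common.sh` source; the function name is hypothetical.

```shell
# Reconstructed from the xtrace above: count the active /dev/nbd
# devices in the JSON that the nbd_get_disks RPC would return.
# The JSON argument here is a stand-in for the real RPC output.
nbd_get_count_sketch() {
    local nbd_disks_json=$1
    # grep -c exits nonzero when there are zero matches (as seen in the
    # trace, where the script follows it with `true`), so force success.
    echo "$nbd_disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
}
```

With two attached disks this prints `2`; after both `nbd_stop_disk` calls the RPC returns `[]` and the count drops to `0`, which is what the `'[' 0 -ne 0 ']'` check in the trace relies on.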
00:05:42.756 15:22:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.756 15:22:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.756 15:22:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.756 15:22:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:42.756 15:22:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.756 Malloc0 00:05:42.756 15:22:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.015 Malloc1 00:05:43.015 15:22:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.015 15:22:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.015 /dev/nbd0 00:05:43.274 15:22:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.274 15:22:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.274 1+0 records in 00:05:43.274 1+0 records out 00:05:43.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186839 s, 21.9 MB/s 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:43.274 15:22:49 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:43.274 15:22:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.274 15:22:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.274 15:22:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.274 /dev/nbd1 00:05:43.274 15:22:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.274 15:22:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:43.274 15:22:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.533 1+0 records in 00:05:43.533 1+0 records out 00:05:43.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235261 s, 17.4 MB/s 00:05:43.533 15:22:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.533 15:22:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:43.533 15:22:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.533 15:22:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:43.533 15:22:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:43.533 15:22:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.533 15:22:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.533 15:22:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.533 15:22:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.534 { 00:05:43.534 "nbd_device": "/dev/nbd0", 00:05:43.534 "bdev_name": "Malloc0" 00:05:43.534 }, 00:05:43.534 { 00:05:43.534 "nbd_device": "/dev/nbd1", 00:05:43.534 "bdev_name": "Malloc1" 00:05:43.534 } 00:05:43.534 ]' 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.534 { 00:05:43.534 "nbd_device": "/dev/nbd0", 00:05:43.534 "bdev_name": "Malloc0" 00:05:43.534 }, 00:05:43.534 { 00:05:43.534 "nbd_device": "/dev/nbd1", 00:05:43.534 "bdev_name": "Malloc1" 00:05:43.534 } 00:05:43.534 ]' 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.534 /dev/nbd1' 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.534 /dev/nbd1' 00:05:43.534 
15:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.534 15:22:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.792 256+0 records in 00:05:43.792 256+0 records out 00:05:43.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107001 s, 98.0 MB/s 00:05:43.792 15:22:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.793 256+0 records in 00:05:43.793 256+0 records out 00:05:43.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139668 s, 75.1 MB/s 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.793 256+0 records in 00:05:43.793 256+0 records out 00:05:43.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147472 s, 71.1 MB/s 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.793 15:22:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.051 15:22:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.051 15:22:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.051 15:22:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.051 15:22:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.051 15:22:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.051 15:22:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.051 15:22:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.051 15:22:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.051 15:22:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.051 15:22:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.051 15:22:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.051 15:22:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.052 15:22:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.052 15:22:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.052 15:22:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.052 15:22:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.052 15:22:50 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:44.052 15:22:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.052 15:22:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.052 15:22:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.052 15:22:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.311 15:22:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.311 15:22:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.311 15:22:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.311 15:22:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.311 15:22:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.311 15:22:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.311 15:22:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.311 15:22:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.311 15:22:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.311 15:22:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.311 15:22:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.311 15:22:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.311 15:22:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.571 15:22:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.830 [2024-12-06 15:22:50.618314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.831 [2024-12-06 15:22:50.655422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.831 [2024-12-06 15:22:50.655423] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.831 [2024-12-06 15:22:50.697108] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.831 [2024-12-06 15:22:50.697151] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.118 15:22:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.118 15:22:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:48.118 spdk_app_start Round 2 00:05:48.118 15:22:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2822238 /var/tmp/spdk-nbd.sock 00:05:48.118 15:22:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2822238 ']' 00:05:48.118 15:22:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.118 15:22:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.118 15:22:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
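The `nbd_dd_data_verify` write/verify passes traced in each round (fill a 1 MiB temp file from `/dev/urandom`, `dd` it onto every nbd device, then `cmp` it back) can be sketched as below. Plain files stand in for the `/dev/nbd*` devices so the sketch is self-contained, and `oflag=direct`/`iflag=direct` from the real trace are dropped because direct I/O is not generally valid on regular files; the function name is hypothetical.

```shell
# Sketch of the write/verify data path seen in the trace: 256 blocks of
# 4096 bytes (1 MiB total) written to each target, then compared back.
nbd_dd_data_verify_sketch() {
    local operation=$1; shift
    local tmp_file=/tmp/nbdrandtest.$$
    local dev
    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
        for dev in "$@"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
        done
    elif [ "$operation" = verify ]; then
        for dev in "$@"; do
            # 1048576 bytes = the 1M byte count from the traced cmp call
            cmp -n 1048576 "$tmp_file" "$dev" || return 1
        done
        rm -f "$tmp_file"
    fi
}
```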
00:05:48.118 15:22:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.118 15:22:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.118 15:22:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.118 15:22:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:48.118 15:22:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.118 Malloc0 00:05:48.118 15:22:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.118 Malloc1 00:05:48.118 15:22:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.118 15:22:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.376 /dev/nbd0 00:05:48.376 15:22:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.376 15:22:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.376 15:22:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:48.376 15:22:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.376 15:22:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.376 15:22:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.376 15:22:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:48.376 15:22:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.376 15:22:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.376 15:22:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.376 15:22:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.376 1+0 records in 00:05:48.376 1+0 records out 00:05:48.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183595 s, 22.3 MB/s 00:05:48.376 15:22:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.376 15:22:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:48.376 15:22:54 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.376 15:22:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.376 15:22:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:48.377 15:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.377 15:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.377 15:22:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.635 /dev/nbd1 00:05:48.635 15:22:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.635 15:22:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.635 15:22:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:48.635 15:22:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.635 15:22:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.635 15:22:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.635 15:22:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:48.635 15:22:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.635 15:22:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.635 15:22:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.635 15:22:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.635 1+0 records in 00:05:48.635 1+0 records out 00:05:48.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188214 s, 21.8 MB/s 00:05:48.635 15:22:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.635 15:22:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:48.635 15:22:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.635 15:22:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.635 15:22:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:48.635 15:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.635 15:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.635 15:22:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.635 15:22:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.635 15:22:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.893 { 00:05:48.893 "nbd_device": "/dev/nbd0", 00:05:48.893 "bdev_name": "Malloc0" 00:05:48.893 }, 00:05:48.893 { 00:05:48.893 "nbd_device": "/dev/nbd1", 00:05:48.893 "bdev_name": "Malloc1" 00:05:48.893 } 00:05:48.893 ]' 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.893 { 00:05:48.893 "nbd_device": "/dev/nbd0", 00:05:48.893 "bdev_name": "Malloc0" 00:05:48.893 }, 00:05:48.893 { 00:05:48.893 "nbd_device": "/dev/nbd1", 00:05:48.893 "bdev_name": "Malloc1" 00:05:48.893 } 00:05:48.893 ]' 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.893 /dev/nbd1' 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.893 /dev/nbd1' 00:05:48.893 
15:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.893 256+0 records in 00:05:48.893 256+0 records out 00:05:48.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00998756 s, 105 MB/s 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.893 256+0 records in 00:05:48.893 256+0 records out 00:05:48.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138149 s, 75.9 MB/s 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.893 15:22:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.151 256+0 records in 00:05:49.151 256+0 records out 00:05:49.151 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149031 s, 70.4 MB/s 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.151 15:22:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.151 15:22:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.151 15:22:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.151 15:22:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.151 15:22:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.151 15:22:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.151 15:22:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.151 15:22:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.151 15:22:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.151 15:22:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.151 15:22:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.408 15:22:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.408 15:22:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.408 15:22:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.408 15:22:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.408 15:22:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.408 15:22:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.408 15:22:55 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:49.408 15:22:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.408 15:22:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.408 15:22:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.408 15:22:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.666 15:22:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.666 15:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.666 15:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.666 15:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.666 15:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.666 15:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.666 15:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.666 15:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.666 15:22:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.666 15:22:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.666 15:22:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.666 15:22:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.666 15:22:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.925 15:22:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.183 [2024-12-06 15:22:55.960876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.183 [2024-12-06 15:22:55.997874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.183 [2024-12-06 15:22:55.997875] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.183 [2024-12-06 15:22:56.038938] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.183 [2024-12-06 15:22:56.038978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.465 15:22:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2822238 /var/tmp/spdk-nbd.sock 00:05:53.465 15:22:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2822238 ']' 00:05:53.465 15:22:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.465 15:22:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.465 15:22:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
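The dd write and `cmp -b -n 1M` verify sequence traced above (from `bdev/nbd_common.sh`) can be sketched as a standalone script. This is a minimal sketch, not the actual helper: the NBD devices `/dev/nbd0`/`/dev/nbd1` are replaced with ordinary temp files so it runs without an spdk-nbd server, and `oflag=direct` is dropped because plain files may not support O_DIRECT.

```shell
#!/usr/bin/env bash
# Sketch of nbd_common.sh's write/verify phases; temp files stand in for
# the NBD devices, so no spdk-nbd server is needed.
set -euo pipefail

pattern=$(mktemp)   # stands in for .../test/event/nbdrandtest
dev0=$(mktemp)      # stands in for /dev/nbd0
dev1=$(mktemp)      # stands in for /dev/nbd1

# 256 blocks of 4096 bytes = 1 MiB of random data, as in the trace.
dd if=/dev/urandom of="$pattern" bs=4096 count=256 status=none

# Write phase: copy the pattern onto every "device" in the list.
for dev in "$dev0" "$dev1"; do
    dd if="$pattern" of="$dev" bs=4096 count=256 status=none
done

# Verify phase: byte-compare the first 1M of each device against the
# source; cmp exits non-zero (and set -e aborts) on any mismatch.
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$pattern" "$dev"
done
echo "verify OK"
```

The real helper additionally removes the pattern file afterwards, which is the `rm .../nbdrandtest` entry in the trace.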
00:05:53.465 15:22:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.465 15:22:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.465 15:22:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.465 15:22:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.465 15:22:59 event.app_repeat -- event/event.sh@39 -- # killprocess 2822238 00:05:53.465 15:22:59 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2822238 ']' 00:05:53.465 15:22:59 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2822238 00:05:53.465 15:22:59 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:53.465 15:22:59 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.465 15:22:59 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2822238 00:05:53.465 15:22:59 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.465 15:22:59 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.465 15:22:59 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2822238' 00:05:53.465 killing process with pid 2822238 00:05:53.465 15:22:59 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2822238 00:05:53.465 15:22:59 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2822238 00:05:53.465 spdk_app_start is called in Round 0. 00:05:53.465 Shutdown signal received, stop current app iteration 00:05:53.465 Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 reinitialization... 00:05:53.465 spdk_app_start is called in Round 1. 00:05:53.465 Shutdown signal received, stop current app iteration 00:05:53.465 Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 reinitialization... 00:05:53.465 spdk_app_start is called in Round 2. 
00:05:53.465 Shutdown signal received, stop current app iteration 00:05:53.465 Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 reinitialization... 00:05:53.465 spdk_app_start is called in Round 3. 00:05:53.465 Shutdown signal received, stop current app iteration 00:05:53.465 15:22:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:53.465 15:22:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:53.465 00:05:53.465 real 0m16.412s 00:05:53.465 user 0m36.039s 00:05:53.465 sys 0m2.653s 00:05:53.465 15:22:59 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.465 15:22:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.465 ************************************ 00:05:53.465 END TEST app_repeat 00:05:53.465 ************************************ 00:05:53.465 15:22:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:53.465 15:22:59 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:53.465 15:22:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.465 15:22:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.465 15:22:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.465 ************************************ 00:05:53.465 START TEST cpu_locks 00:05:53.465 ************************************ 00:05:53.465 15:22:59 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:53.465 * Looking for test storage... 
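The `killprocess` helper whose xtrace appears above can be reconstructed roughly as follows. This is a sketch inferred from the trace, not the actual `autotest_common.sh` source: it checks the PID is still alive with `kill -0`, confirms the process name with `ps --no-headers -o comm=`, and refuses to kill a `sudo` wrapper before sending the signal.

```shell
#!/usr/bin/env bash
# Rough reconstruction of the killprocess pattern from the xtrace;
# messages and return codes are illustrative, not copied from the source.
set -u

killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1            # still running?
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0
        [ "$name" != sudo ] || return 1               # never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap it; ignore the signal status
}

sleep 60 &                      # stand-in for the spdk_tgt process
killprocess_sketch $!
```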
00:05:53.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:53.465 15:22:59 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:53.465 15:22:59 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:53.465 15:22:59 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:53.465 15:22:59 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.465 15:22:59 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:53.465 15:22:59 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.465 15:22:59 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:53.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.465 --rc genhtml_branch_coverage=1 00:05:53.465 --rc genhtml_function_coverage=1 00:05:53.465 --rc genhtml_legend=1 00:05:53.465 --rc geninfo_all_blocks=1 00:05:53.465 --rc geninfo_unexecuted_blocks=1 00:05:53.465 00:05:53.465 ' 00:05:53.465 15:22:59 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:53.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.465 --rc genhtml_branch_coverage=1 00:05:53.465 --rc genhtml_function_coverage=1 00:05:53.465 --rc genhtml_legend=1 00:05:53.465 --rc geninfo_all_blocks=1 00:05:53.465 --rc geninfo_unexecuted_blocks=1 
00:05:53.465 00:05:53.465 ' 00:05:53.465 15:22:59 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:53.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.465 --rc genhtml_branch_coverage=1 00:05:53.465 --rc genhtml_function_coverage=1 00:05:53.465 --rc genhtml_legend=1 00:05:53.465 --rc geninfo_all_blocks=1 00:05:53.465 --rc geninfo_unexecuted_blocks=1 00:05:53.465 00:05:53.465 ' 00:05:53.465 15:22:59 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:53.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.465 --rc genhtml_branch_coverage=1 00:05:53.465 --rc genhtml_function_coverage=1 00:05:53.465 --rc genhtml_legend=1 00:05:53.465 --rc geninfo_all_blocks=1 00:05:53.466 --rc geninfo_unexecuted_blocks=1 00:05:53.466 00:05:53.466 ' 00:05:53.466 15:22:59 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:53.466 15:22:59 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:53.466 15:22:59 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:53.466 15:22:59 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:53.466 15:22:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.466 15:22:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.466 15:22:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.724 ************************************ 00:05:53.724 START TEST default_locks 00:05:53.724 ************************************ 00:05:53.724 15:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:53.724 15:22:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2825348 00:05:53.724 15:22:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2825348 00:05:53.724 15:22:59 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.724 15:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2825348 ']' 00:05:53.724 15:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.724 15:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.724 15:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.724 15:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.724 15:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.724 [2024-12-06 15:22:59.541193] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
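Both `waitforlisten` (polling for the RPC socket) and the earlier `waitfornbd_exit` (polling `/proc/partitions` with `(( i <= 20 ))` and `break`) follow the same bounded-retry shape: try a condition up to N times, succeed early, fail if retries run out. A generic sketch of that pattern, with the helper name `wait_for` and the timings invented here:

```shell
#!/usr/bin/env bash
# Generic form of the bounded-retry wait used by waitforlisten and
# waitfornbd_exit in the trace; names and delays are illustrative.
set -u

wait_for() {                      # wait_for <max_tries> <delay> <cmd...>
    local max=$1 delay=$2 i
    shift 2
    for (( i = 1; i <= max; i++ )); do
        if "$@"; then
            return 0              # condition met; the trace shows this as 'break'
        fi
        sleep "$delay"
    done
    return 1                      # gave up; the caller reports the timeout
}

marker=$(mktemp -u)               # a path that does not exist yet
( sleep 0.3; touch "$marker" ) &  # something creates it shortly
wait_for 20 0.1 test -e "$marker" && echo "condition met"
rm -f "$marker"
```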
00:05:53.725 [2024-12-06 15:22:59.541229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825348 ] 00:05:53.725 [2024-12-06 15:22:59.611913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.725 [2024-12-06 15:22:59.651542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.659 15:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.659 15:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:54.659 15:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2825348 00:05:54.659 15:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2825348 00:05:54.659 15:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.919 lslocks: write error 00:05:54.919 15:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2825348 00:05:54.919 15:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2825348 ']' 00:05:54.919 15:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2825348 00:05:54.919 15:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:54.919 15:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.919 15:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825348 00:05:54.919 15:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.919 15:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.919 15:23:00 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2825348' 00:05:54.919 killing process with pid 2825348 00:05:54.919 15:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2825348 00:05:54.919 15:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2825348 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2825348 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2825348 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2825348 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2825348 ']' 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2825348) - No such process 00:05:55.178 ERROR: process (pid: 2825348) is no longer running 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:55.178 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.179 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.179 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.179 15:23:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:55.179 15:23:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.179 15:23:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.179 15:23:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.179 00:05:55.179 real 0m1.668s 00:05:55.179 user 0m1.742s 00:05:55.179 sys 0m0.568s 00:05:55.179 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.179 15:23:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.179 ************************************ 00:05:55.179 END TEST default_locks 00:05:55.179 ************************************ 00:05:55.438 15:23:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:55.438 15:23:01 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.438 15:23:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.438 15:23:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.438 ************************************ 00:05:55.438 START TEST default_locks_via_rpc 00:05:55.438 ************************************ 00:05:55.438 15:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:55.438 15:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.438 15:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2825621 00:05:55.438 15:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2825621 00:05:55.438 15:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2825621 ']' 00:05:55.438 15:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.438 15:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.438 15:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.438 15:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.438 15:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.438 [2024-12-06 15:23:01.267079] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:05:55.438 [2024-12-06 15:23:01.267116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825621 ] 00:05:55.438 [2024-12-06 15:23:01.342348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.438 [2024-12-06 15:23:01.385120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.696 15:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.696 15:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:55.696 15:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:55.696 15:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.696 15:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.696 15:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.696 15:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:55.696 15:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.696 15:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.696 15:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.696 15:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:55.697 15:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.697 15:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.697 15:23:01 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.697 15:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2825621 00:05:55.697 15:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2825621 00:05:55.697 15:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.264 15:23:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2825621 00:05:56.264 15:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2825621 ']' 00:05:56.264 15:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2825621 00:05:56.264 15:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:56.264 15:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.264 15:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825621 00:05:56.264 15:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.264 15:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.264 15:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2825621' 00:05:56.264 killing process with pid 2825621 00:05:56.264 15:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2825621 00:05:56.264 15:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2825621 00:05:56.524 00:05:56.524 real 0m1.216s 00:05:56.524 user 0m1.191s 00:05:56.524 sys 0m0.519s 00:05:56.524 15:23:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.524 15:23:02 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.524 ************************************ 00:05:56.524 END TEST default_locks_via_rpc 00:05:56.524 ************************************ 00:05:56.524 15:23:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:56.524 15:23:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.524 15:23:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.524 15:23:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.524 ************************************ 00:05:56.524 START TEST non_locking_app_on_locked_coremask 00:05:56.524 ************************************ 00:05:56.524 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:56.524 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2825876 00:05:56.524 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2825876 /var/tmp/spdk.sock 00:05:56.524 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.524 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2825876 ']' 00:05:56.524 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.524 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.524 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:56.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.524 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.524 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.783 [2024-12-06 15:23:02.560944] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:05:56.783 [2024-12-06 15:23:02.560981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825876 ] 00:05:56.783 [2024-12-06 15:23:02.633535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.783 [2024-12-06 15:23:02.673138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.043 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.043 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.043 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2825885 00:05:57.043 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2825885 /var/tmp/spdk2.sock 00:05:57.043 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:57.043 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2825885 ']' 00:05:57.043 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:57.043 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.043 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.043 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.043 15:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.043 [2024-12-06 15:23:02.945855] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:05:57.043 [2024-12-06 15:23:02.945901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825885 ] 00:05:57.043 [2024-12-06 15:23:03.031650] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
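The `locks_exist`/`lslocks -p <pid> | grep spdk_cpu_lock` checks and the `--disable-cpumask-locks` flag above revolve around SPDK's per-core lock files, taken with a POSIX file lock by the running target. The mechanism can be emulated with `flock(1)`; the lock file path and descriptor number below are invented for the demo, not SPDK's real ones.

```shell
#!/usr/bin/env bash
# Emulates the per-core lock that the `grep spdk_cpu_lock` check looks
# for: one holder takes an exclusive lock, a second attempt then fails.
set -u

lockfile=$(mktemp)          # stands in for SPDK's real per-core lock file
exec 9>"$lockfile"          # keep an fd open for the lifetime of the "reactor"
flock -n 9 && echo "core lock acquired"

# A second spdk_tgt claiming the same core would hit this, which is why
# the second instance in the trace runs with --disable-cpumask-locks:
flock -n "$lockfile" -c true || echo "core lock is held elsewhere"

flock -u 9                  # release, as the target does on shutdown
rm -f "$lockfile"
```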
00:05:57.043 [2024-12-06 15:23:03.031671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.302 [2024-12-06 15:23:03.117391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.871 15:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.871 15:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.871 15:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2825876 00:05:57.871 15:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2825876 00:05:57.871 15:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.439 lslocks: write error 00:05:58.439 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2825876 00:05:58.439 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2825876 ']' 00:05:58.439 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2825876 00:05:58.439 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:58.439 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.439 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825876 00:05:58.439 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.439 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.439 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2825876' 00:05:58.439 killing process with pid 2825876 00:05:58.439 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2825876 00:05:58.439 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2825876 00:05:59.007 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2825885 00:05:59.007 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2825885 ']' 00:05:59.007 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2825885 00:05:59.007 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.007 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.007 15:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825885 00:05:59.266 15:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.266 15:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.266 15:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2825885' 00:05:59.266 killing process with pid 2825885 00:05:59.266 15:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2825885 00:05:59.266 15:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2825885 00:05:59.526 00:05:59.526 real 0m2.828s 00:05:59.526 user 0m2.961s 00:05:59.526 sys 0m0.952s 00:05:59.526 15:23:05 
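The repeated `killprocess` invocations above follow a fixed shape: guard against an empty pid, check the process (the real helper also inspects `ps --no-headers -o comm=`), `kill -0` to probe it, then `kill` and `wait`. A simplified sketch of that shape, with the name-inspection step omitted:

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess pattern traced in this log.
# The real autotest_common.sh helper additionally checks the process
# name via `ps --no-headers -o comm=` before killing.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1             # the '[' -z ... ']' guard in the log
  kill -0 "$pid" 2>/dev/null || return 0  # already gone: nothing to do
  kill "$pid"
  wait "$pid" 2>/dev/null || true       # reap the child so no zombie remains
}

sleep 60 &
victim=$!
killprocess "$victim"
```

The trailing `wait` is what produces the `-- # wait <pid>` lines in the trace; it both reaps the child and surfaces its exit status.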
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.526 15:23:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.526 ************************************ 00:05:59.526 END TEST non_locking_app_on_locked_coremask 00:05:59.526 ************************************ 00:05:59.526 15:23:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:59.526 15:23:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.526 15:23:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.526 15:23:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.526 ************************************ 00:05:59.526 START TEST locking_app_on_unlocked_coremask 00:05:59.526 ************************************ 00:05:59.526 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:59.526 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:59.526 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2826376 00:05:59.526 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2826376 /var/tmp/spdk.sock 00:05:59.526 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2826376 ']' 00:05:59.526 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.526 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.526 15:23:05 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.526 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.526 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.526 [2024-12-06 15:23:05.431226] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:05:59.526 [2024-12-06 15:23:05.431266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826376 ] 00:05:59.526 [2024-12-06 15:23:05.504756] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.526 [2024-12-06 15:23:05.504781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.785 [2024-12-06 15:23:05.543096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.785 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.785 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.785 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2826379 00:05:59.785 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2826379 /var/tmp/spdk2.sock 00:05:59.785 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.785 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2826379 ']' 00:05:59.785 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.785 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.785 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.785 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.785 15:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.044 [2024-12-06 15:23:05.813699] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:06:00.044 [2024-12-06 15:23:05.813747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826379 ] 00:06:00.044 [2024-12-06 15:23:05.904751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.044 [2024-12-06 15:23:05.987649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.982 15:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.982 15:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.982 15:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2826379 00:06:00.982 15:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2826379 00:06:00.982 15:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.241 lslocks: write error 00:06:01.241 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2826376 00:06:01.241 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2826376 ']' 00:06:01.241 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2826376 00:06:01.241 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.241 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.241 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2826376 00:06:01.241 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.241 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.241 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2826376' 00:06:01.241 killing process with pid 2826376 00:06:01.241 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2826376 00:06:01.241 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2826376 00:06:01.811 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2826379 00:06:01.811 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2826379 ']' 00:06:01.811 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2826379 00:06:01.811 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.811 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.811 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2826379 00:06:01.811 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.811 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.811 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2826379' 00:06:01.811 killing process with pid 2826379 00:06:01.811 15:23:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2826379 00:06:01.811 15:23:07 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2826379 00:06:02.070 00:06:02.070 real 0m2.647s 00:06:02.070 user 0m2.815s 00:06:02.070 sys 0m0.861s 00:06:02.071 15:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.071 15:23:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.071 ************************************ 00:06:02.071 END TEST locking_app_on_unlocked_coremask 00:06:02.071 ************************************ 00:06:02.329 15:23:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:02.330 15:23:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.330 15:23:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.330 15:23:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.330 ************************************ 00:06:02.330 START TEST locking_app_on_locked_coremask 00:06:02.330 ************************************ 00:06:02.330 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:02.330 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2826874 00:06:02.330 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2826874 /var/tmp/spdk.sock 00:06:02.330 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.330 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2826874 ']' 00:06:02.330 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:02.330 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.330 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.330 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.330 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.330 [2024-12-06 15:23:08.166666] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:06:02.330 [2024-12-06 15:23:08.166713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826874 ] 00:06:02.330 [2024-12-06 15:23:08.239913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.330 [2024-12-06 15:23:08.280872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2826882 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2826882 /var/tmp/spdk2.sock 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2826882 /var/tmp/spdk2.sock 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2826882 /var/tmp/spdk2.sock 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2826882 ']' 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.589 15:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.589 [2024-12-06 15:23:08.554066] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:06:02.589 [2024-12-06 15:23:08.554114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826882 ] 00:06:02.847 [2024-12-06 15:23:08.642540] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2826874 has claimed it. 00:06:02.847 [2024-12-06 15:23:08.642575] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:03.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2826882) - No such process 00:06:03.412 ERROR: process (pid: 2826882) is no longer running 00:06:03.412 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.412 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:03.412 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:03.412 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.412 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:03.412 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.412 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2826874 00:06:03.413 15:23:09 
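The `claim_cpu_cores` error above ("Cannot create lock on core 0, probably process 2826874 has claimed it") is an advisory file-lock conflict on the `/var/tmp/spdk_cpu_lock_*` files. A reduced analogue of that conflict, using a temporary file instead of `/var/tmp/spdk_cpu_lock_000` and util-linux `flock(1)` as a stand-in for SPDK's internal locking call (an assumption for illustration; the exact syscall SPDK uses is an implementation detail):

```shell
#!/usr/bin/env bash
# Reduced analogue of the core-claim conflict seen above: the first
# holder takes an exclusive advisory lock on the lock file, and any
# later attempt from a different open file description is refused.
lockfile=$(mktemp)
exec 9>"$lockfile"
flock -n 9 && echo "first instance claimed the core"

# A second open of the same file cannot take the lock while fd 9 holds
# it, mirroring the second spdk_tgt exiting with "Unable to acquire
# lock on assigned core mask".
if ( exec 8>"$lockfile"; flock -n 8 ); then
  claim2=ok
else
  claim2=denied
fi
echo "second claim: $claim2"
rm -f "$lockfile"
```

Because the lock is advisory and held for the life of the descriptor, killing the first holder (as the tests do) releases it automatically.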
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2826874 00:06:03.413 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.997 lslocks: write error 00:06:03.997 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2826874 00:06:03.997 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2826874 ']' 00:06:03.997 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2826874 00:06:03.997 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.997 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.997 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2826874 00:06:03.997 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.997 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.997 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2826874' 00:06:03.997 killing process with pid 2826874 00:06:03.997 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2826874 00:06:03.997 15:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2826874 00:06:04.256 00:06:04.256 real 0m1.929s 00:06:04.256 user 0m2.075s 00:06:04.256 sys 0m0.659s 00:06:04.256 15:23:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.256 15:23:10 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:04.256 ************************************ 00:06:04.256 END TEST locking_app_on_locked_coremask 00:06:04.256 ************************************ 00:06:04.256 15:23:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:04.256 15:23:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.256 15:23:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.256 15:23:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.256 ************************************ 00:06:04.256 START TEST locking_overlapped_coremask 00:06:04.256 ************************************ 00:06:04.256 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:04.256 15:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:04.256 15:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2827186 00:06:04.256 15:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2827186 /var/tmp/spdk.sock 00:06:04.256 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2827186 ']' 00:06:04.256 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.256 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.256 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.256 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.256 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.256 [2024-12-06 15:23:10.148801] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:06:04.256 [2024-12-06 15:23:10.148837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827186 ] 00:06:04.256 [2024-12-06 15:23:10.224125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.519 [2024-12-06 15:23:10.270186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.519 [2024-12-06 15:23:10.270298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.519 [2024-12-06 15:23:10.270298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2827365 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2827365 /var/tmp/spdk2.sock 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2827365 /var/tmp/spdk2.sock 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2827365 /var/tmp/spdk2.sock 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2827365 ']' 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.519 15:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.779 [2024-12-06 15:23:10.541221] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:06:04.779 [2024-12-06 15:23:10.541271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827365 ] 00:06:04.779 [2024-12-06 15:23:10.633192] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2827186 has claimed it. 00:06:04.779 [2024-12-06 15:23:10.633227] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:05.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2827365) - No such process 00:06:05.344 ERROR: process (pid: 2827365) is no longer running 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2827186 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2827186 ']' 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2827186 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2827186 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2827186' 00:06:05.344 killing process with pid 2827186 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2827186 00:06:05.344 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2827186 00:06:05.603 00:06:05.603 real 0m1.428s 00:06:05.603 user 0m3.954s 00:06:05.603 sys 0m0.378s 00:06:05.603 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.603 15:23:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.603 
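The `check_remaining_locks` comparison traced above globs `/var/tmp/spdk_cpu_lock_*` into an array and string-compares it against the brace expansion for the cores in the mask (`000..002` for `-m 0x7`). The same comparison, reproduced against a temporary directory so it is self-contained:

```shell
#!/usr/bin/env bash
# Self-contained reproduction of the check_remaining_locks comparison
# traced above, pointed at a temp directory instead of /var/tmp.
tmp=$(mktemp -d)
touch "$tmp"/spdk_cpu_lock_{000..002}        # what a 0x7 run leaves behind

locks=("$tmp"/spdk_cpu_lock_*)               # what is actually on disk
locks_expected=("$tmp"/spdk_cpu_lock_{000..002})

# Globs expand in sorted order and the brace range is already sorted,
# so a flat string comparison of the two arrays is sufficient.
if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
  verdict=match
else
  verdict=mismatch
fi
echo "remaining locks: $verdict"
rm -r "$tmp"
```

A stray or missing lock file changes the glob expansion and fails the `[[ ... == ... ]]` test, which is exactly what the escaped-pattern comparison in the log is checking.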
************************************ 00:06:05.603 END TEST locking_overlapped_coremask 00:06:05.603 ************************************ 00:06:05.603 15:23:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:05.603 15:23:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.603 15:23:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.603 15:23:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.862 ************************************ 00:06:05.862 START TEST locking_overlapped_coremask_via_rpc 00:06:05.862 ************************************ 00:06:05.862 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:05.862 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2827520 00:06:05.862 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2827520 /var/tmp/spdk.sock 00:06:05.862 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:05.862 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2827520 ']' 00:06:05.862 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.862 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.862 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:05.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.862 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.862 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.862 [2024-12-06 15:23:11.654730] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:06:05.862 [2024-12-06 15:23:11.654771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827520 ] 00:06:05.862 [2024-12-06 15:23:11.720835] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:05.862 [2024-12-06 15:23:11.720869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.862 [2024-12-06 15:23:11.777148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.862 [2024-12-06 15:23:11.777261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.862 [2024-12-06 15:23:11.777263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.121 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.121 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.121 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2827634 00:06:06.121 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2827634 /var/tmp/spdk2.sock 00:06:06.121 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:06:06.121 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2827634 ']' 00:06:06.121 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.121 15:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.121 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.121 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.121 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.121 [2024-12-06 15:23:12.049288] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:06:06.121 [2024-12-06 15:23:12.049340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827634 ] 00:06:06.380 [2024-12-06 15:23:12.140868] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:06.380 [2024-12-06 15:23:12.140893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.380 [2024-12-06 15:23:12.228226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.380 [2024-12-06 15:23:12.228339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.380 [2024-12-06 15:23:12.228339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:06.947 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.948 15:23:12 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.948 [2024-12-06 15:23:12.894440] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2827520 has claimed it. 00:06:06.948 request: 00:06:06.948 { 00:06:06.948 "method": "framework_enable_cpumask_locks", 00:06:06.948 "req_id": 1 00:06:06.948 } 00:06:06.948 Got JSON-RPC error response 00:06:06.948 response: 00:06:06.948 { 00:06:06.948 "code": -32603, 00:06:06.948 "message": "Failed to claim CPU core: 2" 00:06:06.948 } 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2827520 /var/tmp/spdk.sock 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2827520 ']' 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.948 15:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.206 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.206 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:07.207 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2827634 /var/tmp/spdk2.sock 00:06:07.207 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2827634 ']' 00:06:07.207 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.207 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.207 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:07.207 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.207 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.465 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.465 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:07.465 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:07.465 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:07.465 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:07.465 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:07.465 00:06:07.465 real 0m1.721s 00:06:07.465 user 0m0.879s 00:06:07.465 sys 0m0.139s 00:06:07.465 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.465 15:23:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.465 ************************************ 00:06:07.465 END TEST locking_overlapped_coremask_via_rpc 00:06:07.465 ************************************ 00:06:07.465 15:23:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:07.465 15:23:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2827520 ]] 00:06:07.465 15:23:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2827520 00:06:07.465 15:23:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2827520 ']' 00:06:07.465 15:23:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2827520 00:06:07.465 15:23:13 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:07.465 15:23:13 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.465 15:23:13 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2827520 00:06:07.465 15:23:13 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.465 15:23:13 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.465 15:23:13 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2827520' 00:06:07.465 killing process with pid 2827520 00:06:07.465 15:23:13 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2827520 00:06:07.465 15:23:13 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2827520 00:06:07.724 15:23:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2827634 ]] 00:06:07.724 15:23:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2827634 00:06:07.724 15:23:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2827634 ']' 00:06:07.724 15:23:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2827634 00:06:07.983 15:23:13 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:07.983 15:23:13 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.983 15:23:13 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2827634 00:06:07.983 15:23:13 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:07.983 15:23:13 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:07.983 15:23:13 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2827634' 00:06:07.983 killing process with pid 2827634 00:06:07.983 15:23:13 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2827634 00:06:07.983 15:23:13 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2827634 00:06:08.243 15:23:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.243 15:23:14 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:08.243 15:23:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2827520 ]] 00:06:08.243 15:23:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2827520 00:06:08.243 15:23:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2827520 ']' 00:06:08.243 15:23:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2827520 00:06:08.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2827520) - No such process 00:06:08.243 15:23:14 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2827520 is not found' 00:06:08.243 Process with pid 2827520 is not found 00:06:08.243 15:23:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2827634 ]] 00:06:08.243 15:23:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2827634 00:06:08.243 15:23:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2827634 ']' 00:06:08.243 15:23:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2827634 00:06:08.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2827634) - No such process 00:06:08.243 15:23:14 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2827634 is not found' 00:06:08.243 Process with pid 2827634 is not found 00:06:08.243 15:23:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.243 00:06:08.243 real 0m14.802s 00:06:08.243 user 0m25.358s 00:06:08.243 sys 0m5.041s 00:06:08.243 15:23:14 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.243 
15:23:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.243 ************************************ 00:06:08.243 END TEST cpu_locks 00:06:08.243 ************************************ 00:06:08.243 00:06:08.243 real 0m39.830s 00:06:08.243 user 1m15.779s 00:06:08.243 sys 0m8.680s 00:06:08.243 15:23:14 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.243 15:23:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.243 ************************************ 00:06:08.243 END TEST event 00:06:08.243 ************************************ 00:06:08.243 15:23:14 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:08.243 15:23:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.243 15:23:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.243 15:23:14 -- common/autotest_common.sh@10 -- # set +x 00:06:08.243 ************************************ 00:06:08.243 START TEST thread 00:06:08.243 ************************************ 00:06:08.243 15:23:14 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:08.503 * Looking for test storage... 
00:06:08.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:08.503 15:23:14 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.503 15:23:14 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.503 15:23:14 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.503 15:23:14 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.503 15:23:14 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.503 15:23:14 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.503 15:23:14 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.503 15:23:14 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.503 15:23:14 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.503 15:23:14 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.503 15:23:14 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.503 15:23:14 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.503 15:23:14 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.503 15:23:14 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.503 15:23:14 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.503 15:23:14 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:08.503 15:23:14 thread -- scripts/common.sh@345 -- # : 1 00:06:08.503 15:23:14 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.503 15:23:14 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.503 15:23:14 thread -- scripts/common.sh@365 -- # decimal 1 00:06:08.503 15:23:14 thread -- scripts/common.sh@353 -- # local d=1 00:06:08.503 15:23:14 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.503 15:23:14 thread -- scripts/common.sh@355 -- # echo 1 00:06:08.503 15:23:14 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.503 15:23:14 thread -- scripts/common.sh@366 -- # decimal 2 00:06:08.503 15:23:14 thread -- scripts/common.sh@353 -- # local d=2 00:06:08.503 15:23:14 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.503 15:23:14 thread -- scripts/common.sh@355 -- # echo 2 00:06:08.503 15:23:14 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.503 15:23:14 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.503 15:23:14 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.503 15:23:14 thread -- scripts/common.sh@368 -- # return 0 00:06:08.503 15:23:14 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.503 15:23:14 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.503 --rc genhtml_branch_coverage=1 00:06:08.503 --rc genhtml_function_coverage=1 00:06:08.503 --rc genhtml_legend=1 00:06:08.503 --rc geninfo_all_blocks=1 00:06:08.503 --rc geninfo_unexecuted_blocks=1 00:06:08.503 00:06:08.503 ' 00:06:08.503 15:23:14 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.503 --rc genhtml_branch_coverage=1 00:06:08.503 --rc genhtml_function_coverage=1 00:06:08.503 --rc genhtml_legend=1 00:06:08.503 --rc geninfo_all_blocks=1 00:06:08.503 --rc geninfo_unexecuted_blocks=1 00:06:08.503 00:06:08.503 ' 00:06:08.503 15:23:14 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.503 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.503 --rc genhtml_branch_coverage=1 00:06:08.503 --rc genhtml_function_coverage=1 00:06:08.503 --rc genhtml_legend=1 00:06:08.503 --rc geninfo_all_blocks=1 00:06:08.503 --rc geninfo_unexecuted_blocks=1 00:06:08.503 00:06:08.503 ' 00:06:08.503 15:23:14 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.503 --rc genhtml_branch_coverage=1 00:06:08.503 --rc genhtml_function_coverage=1 00:06:08.503 --rc genhtml_legend=1 00:06:08.503 --rc geninfo_all_blocks=1 00:06:08.503 --rc geninfo_unexecuted_blocks=1 00:06:08.503 00:06:08.503 ' 00:06:08.503 15:23:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.503 15:23:14 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:08.503 15:23:14 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.503 15:23:14 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.503 ************************************ 00:06:08.503 START TEST thread_poller_perf 00:06:08.503 ************************************ 00:06:08.503 15:23:14 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.503 [2024-12-06 15:23:14.418451] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:06:08.503 [2024-12-06 15:23:14.418509] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828114 ] 00:06:08.503 [2024-12-06 15:23:14.492721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.763 [2024-12-06 15:23:14.533148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.764 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:09.700 [2024-12-06T14:23:15.698Z] ====================================== 00:06:09.700 [2024-12-06T14:23:15.698Z] busy:2109081324 (cyc) 00:06:09.700 [2024-12-06T14:23:15.698Z] total_run_count: 425000 00:06:09.700 [2024-12-06T14:23:15.698Z] tsc_hz: 2100000000 (cyc) 00:06:09.700 [2024-12-06T14:23:15.698Z] ====================================== 00:06:09.700 [2024-12-06T14:23:15.698Z] poller_cost: 4962 (cyc), 2362 (nsec) 00:06:09.700 00:06:09.700 real 0m1.181s 00:06:09.700 user 0m1.109s 00:06:09.700 sys 0m0.069s 00:06:09.700 15:23:15 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.700 15:23:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.700 ************************************ 00:06:09.700 END TEST thread_poller_perf 00:06:09.700 ************************************ 00:06:09.700 15:23:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:09.700 15:23:15 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:09.700 15:23:15 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.700 15:23:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.700 ************************************ 00:06:09.700 START TEST thread_poller_perf 00:06:09.700 
************************************ 00:06:09.700 15:23:15 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:09.700 [2024-12-06 15:23:15.668913] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:06:09.700 [2024-12-06 15:23:15.668986] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828283 ] 00:06:09.958 [2024-12-06 15:23:15.745099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.958 [2024-12-06 15:23:15.784522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.958 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:10.893 [2024-12-06T14:23:16.891Z] ====================================== 00:06:10.893 [2024-12-06T14:23:16.891Z] busy:2101513864 (cyc) 00:06:10.893 [2024-12-06T14:23:16.891Z] total_run_count: 5263000 00:06:10.893 [2024-12-06T14:23:16.891Z] tsc_hz: 2100000000 (cyc) 00:06:10.893 [2024-12-06T14:23:16.891Z] ====================================== 00:06:10.893 [2024-12-06T14:23:16.891Z] poller_cost: 399 (cyc), 190 (nsec) 00:06:10.893 00:06:10.893 real 0m1.174s 00:06:10.893 user 0m1.100s 00:06:10.893 sys 0m0.070s 00:06:10.893 15:23:16 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.893 15:23:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.893 ************************************ 00:06:10.893 END TEST thread_poller_perf 00:06:10.893 ************************************ 00:06:10.893 15:23:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:10.893 00:06:10.893 real 0m2.667s 00:06:10.893 user 0m2.372s 00:06:10.893 sys 0m0.311s 00:06:10.893 15:23:16 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.893 15:23:16 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.893 ************************************ 00:06:10.893 END TEST thread 00:06:10.893 ************************************ 00:06:11.160 15:23:16 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:11.160 15:23:16 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:11.160 15:23:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.160 15:23:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.160 15:23:16 -- common/autotest_common.sh@10 -- # set +x 00:06:11.160 ************************************ 00:06:11.160 START TEST app_cmdline 00:06:11.160 ************************************ 00:06:11.160 15:23:16 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:11.160 * Looking for test storage... 00:06:11.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.160 15:23:17 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:11.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.160 --rc genhtml_branch_coverage=1 
00:06:11.160 --rc genhtml_function_coverage=1 00:06:11.160 --rc genhtml_legend=1 00:06:11.160 --rc geninfo_all_blocks=1 00:06:11.160 --rc geninfo_unexecuted_blocks=1 00:06:11.160 00:06:11.160 ' 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:11.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.160 --rc genhtml_branch_coverage=1 00:06:11.160 --rc genhtml_function_coverage=1 00:06:11.160 --rc genhtml_legend=1 00:06:11.160 --rc geninfo_all_blocks=1 00:06:11.160 --rc geninfo_unexecuted_blocks=1 00:06:11.160 00:06:11.160 ' 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:11.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.160 --rc genhtml_branch_coverage=1 00:06:11.160 --rc genhtml_function_coverage=1 00:06:11.160 --rc genhtml_legend=1 00:06:11.160 --rc geninfo_all_blocks=1 00:06:11.160 --rc geninfo_unexecuted_blocks=1 00:06:11.160 00:06:11.160 ' 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:11.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.160 --rc genhtml_branch_coverage=1 00:06:11.160 --rc genhtml_function_coverage=1 00:06:11.160 --rc genhtml_legend=1 00:06:11.160 --rc geninfo_all_blocks=1 00:06:11.160 --rc geninfo_unexecuted_blocks=1 00:06:11.160 00:06:11.160 ' 00:06:11.160 15:23:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:11.160 15:23:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2828618 00:06:11.160 15:23:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2828618 00:06:11.160 15:23:17 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2828618 ']' 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.160 15:23:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.493 [2024-12-06 15:23:17.158089] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:06:11.493 [2024-12-06 15:23:17.158138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828618 ] 00:06:11.493 [2024-12-06 15:23:17.215190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.493 [2024-12-06 15:23:17.257584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.789 15:23:17 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.789 15:23:17 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:11.789 15:23:17 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:11.789 { 00:06:11.789 "version": "SPDK v25.01-pre git sha1 562857cff", 00:06:11.789 "fields": { 00:06:11.789 "major": 25, 00:06:11.789 "minor": 1, 00:06:11.789 "patch": 0, 00:06:11.789 "suffix": "-pre", 00:06:11.789 "commit": "562857cff" 00:06:11.789 } 00:06:11.789 } 00:06:11.789 15:23:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:11.790 15:23:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:11.790 15:23:17 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:06:11.790 15:23:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:11.790 15:23:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:11.790 15:23:17 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.790 15:23:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.790 15:23:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:11.790 15:23:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:11.790 15:23:17 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.790 15:23:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:11.790 15:23:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:11.790 15:23:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.790 15:23:17 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:11.790 15:23:17 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.790 15:23:17 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.790 15:23:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.790 15:23:17 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.790 15:23:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.790 15:23:17 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.790 15:23:17 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:06:11.790 15:23:17 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.790 15:23:17 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:11.790 15:23:17 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:12.049 request: 00:06:12.049 { 00:06:12.049 "method": "env_dpdk_get_mem_stats", 00:06:12.049 "req_id": 1 00:06:12.049 } 00:06:12.049 Got JSON-RPC error response 00:06:12.049 response: 00:06:12.049 { 00:06:12.049 "code": -32601, 00:06:12.049 "message": "Method not found" 00:06:12.049 } 00:06:12.049 15:23:17 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:12.049 15:23:17 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.049 15:23:17 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.049 15:23:17 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.049 15:23:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2828618 00:06:12.049 15:23:17 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2828618 ']' 00:06:12.049 15:23:17 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2828618 00:06:12.049 15:23:17 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:12.049 15:23:17 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.049 15:23:17 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2828618 00:06:12.049 15:23:17 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.049 15:23:17 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.049 15:23:17 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2828618' 00:06:12.049 killing process with pid 2828618 00:06:12.049 
15:23:17 app_cmdline -- common/autotest_common.sh@973 -- # kill 2828618 00:06:12.049 15:23:17 app_cmdline -- common/autotest_common.sh@978 -- # wait 2828618 00:06:12.309 00:06:12.309 real 0m1.321s 00:06:12.309 user 0m1.533s 00:06:12.309 sys 0m0.448s 00:06:12.309 15:23:18 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.309 15:23:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.309 ************************************ 00:06:12.309 END TEST app_cmdline 00:06:12.309 ************************************ 00:06:12.309 15:23:18 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:12.309 15:23:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.309 15:23:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.309 15:23:18 -- common/autotest_common.sh@10 -- # set +x 00:06:12.568 ************************************ 00:06:12.568 START TEST version 00:06:12.568 ************************************ 00:06:12.568 15:23:18 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:12.568 * Looking for test storage... 
00:06:12.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:12.568 15:23:18 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.568 15:23:18 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.568 15:23:18 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.568 15:23:18 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.568 15:23:18 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.568 15:23:18 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.568 15:23:18 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.568 15:23:18 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.568 15:23:18 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.568 15:23:18 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.568 15:23:18 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.568 15:23:18 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.568 15:23:18 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.568 15:23:18 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.568 15:23:18 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.568 15:23:18 version -- scripts/common.sh@344 -- # case "$op" in 00:06:12.568 15:23:18 version -- scripts/common.sh@345 -- # : 1 00:06:12.568 15:23:18 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.568 15:23:18 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.568 15:23:18 version -- scripts/common.sh@365 -- # decimal 1 00:06:12.568 15:23:18 version -- scripts/common.sh@353 -- # local d=1 00:06:12.568 15:23:18 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.568 15:23:18 version -- scripts/common.sh@355 -- # echo 1 00:06:12.568 15:23:18 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.568 15:23:18 version -- scripts/common.sh@366 -- # decimal 2 00:06:12.568 15:23:18 version -- scripts/common.sh@353 -- # local d=2 00:06:12.568 15:23:18 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.568 15:23:18 version -- scripts/common.sh@355 -- # echo 2 00:06:12.568 15:23:18 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.568 15:23:18 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.568 15:23:18 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.568 15:23:18 version -- scripts/common.sh@368 -- # return 0 00:06:12.568 15:23:18 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.568 15:23:18 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.568 --rc genhtml_branch_coverage=1 00:06:12.568 --rc genhtml_function_coverage=1 00:06:12.568 --rc genhtml_legend=1 00:06:12.568 --rc geninfo_all_blocks=1 00:06:12.568 --rc geninfo_unexecuted_blocks=1 00:06:12.568 00:06:12.568 ' 00:06:12.568 15:23:18 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.568 --rc genhtml_branch_coverage=1 00:06:12.568 --rc genhtml_function_coverage=1 00:06:12.568 --rc genhtml_legend=1 00:06:12.568 --rc geninfo_all_blocks=1 00:06:12.568 --rc geninfo_unexecuted_blocks=1 00:06:12.568 00:06:12.568 ' 00:06:12.568 15:23:18 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:12.568 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.568 --rc genhtml_branch_coverage=1 00:06:12.568 --rc genhtml_function_coverage=1 00:06:12.568 --rc genhtml_legend=1 00:06:12.568 --rc geninfo_all_blocks=1 00:06:12.568 --rc geninfo_unexecuted_blocks=1 00:06:12.568 00:06:12.568 ' 00:06:12.568 15:23:18 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.568 --rc genhtml_branch_coverage=1 00:06:12.568 --rc genhtml_function_coverage=1 00:06:12.568 --rc genhtml_legend=1 00:06:12.568 --rc geninfo_all_blocks=1 00:06:12.568 --rc geninfo_unexecuted_blocks=1 00:06:12.568 00:06:12.568 ' 00:06:12.568 15:23:18 version -- app/version.sh@17 -- # get_header_version major 00:06:12.568 15:23:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.568 15:23:18 version -- app/version.sh@14 -- # cut -f2 00:06:12.568 15:23:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.568 15:23:18 version -- app/version.sh@17 -- # major=25 00:06:12.568 15:23:18 version -- app/version.sh@18 -- # get_header_version minor 00:06:12.568 15:23:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.568 15:23:18 version -- app/version.sh@14 -- # cut -f2 00:06:12.569 15:23:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.569 15:23:18 version -- app/version.sh@18 -- # minor=1 00:06:12.569 15:23:18 version -- app/version.sh@19 -- # get_header_version patch 00:06:12.569 15:23:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.569 15:23:18 version -- app/version.sh@14 -- # cut -f2 00:06:12.569 15:23:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.569 
15:23:18 version -- app/version.sh@19 -- # patch=0 00:06:12.569 15:23:18 version -- app/version.sh@20 -- # get_header_version suffix 00:06:12.569 15:23:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.569 15:23:18 version -- app/version.sh@14 -- # cut -f2 00:06:12.569 15:23:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.569 15:23:18 version -- app/version.sh@20 -- # suffix=-pre 00:06:12.569 15:23:18 version -- app/version.sh@22 -- # version=25.1 00:06:12.569 15:23:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:12.569 15:23:18 version -- app/version.sh@28 -- # version=25.1rc0 00:06:12.569 15:23:18 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:12.569 15:23:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:12.569 15:23:18 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:12.569 15:23:18 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:12.569 00:06:12.569 real 0m0.243s 00:06:12.569 user 0m0.150s 00:06:12.569 sys 0m0.138s 00:06:12.569 15:23:18 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.569 15:23:18 version -- common/autotest_common.sh@10 -- # set +x 00:06:12.569 ************************************ 00:06:12.569 END TEST version 00:06:12.569 ************************************ 00:06:12.827 15:23:18 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:12.827 15:23:18 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:12.827 15:23:18 -- spdk/autotest.sh@194 -- # uname -s 00:06:12.827 15:23:18 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:12.827 15:23:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:12.827 15:23:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:12.827 15:23:18 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:12.827 15:23:18 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:12.827 15:23:18 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:12.827 15:23:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:12.827 15:23:18 -- common/autotest_common.sh@10 -- # set +x 00:06:12.827 15:23:18 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:12.827 15:23:18 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:12.827 15:23:18 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:12.827 15:23:18 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:12.827 15:23:18 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:12.827 15:23:18 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:12.827 15:23:18 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:12.827 15:23:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:12.827 15:23:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.827 15:23:18 -- common/autotest_common.sh@10 -- # set +x 00:06:12.827 ************************************ 00:06:12.827 START TEST nvmf_tcp 00:06:12.827 ************************************ 00:06:12.827 15:23:18 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:12.827 * Looking for test storage... 
00:06:12.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:12.827 15:23:18 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.827 15:23:18 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.827 15:23:18 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:13.085 15:23:18 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:13.085 15:23:18 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.086 15:23:18 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:13.086 15:23:18 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.086 15:23:18 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:13.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.086 --rc genhtml_branch_coverage=1 00:06:13.086 --rc genhtml_function_coverage=1 00:06:13.086 --rc genhtml_legend=1 00:06:13.086 --rc geninfo_all_blocks=1 00:06:13.086 --rc geninfo_unexecuted_blocks=1 00:06:13.086 00:06:13.086 ' 00:06:13.086 15:23:18 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:13.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.086 --rc genhtml_branch_coverage=1 00:06:13.086 --rc genhtml_function_coverage=1 00:06:13.086 --rc genhtml_legend=1 00:06:13.086 --rc geninfo_all_blocks=1 00:06:13.086 --rc geninfo_unexecuted_blocks=1 00:06:13.086 00:06:13.086 ' 00:06:13.086 15:23:18 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:13.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.086 --rc genhtml_branch_coverage=1 00:06:13.086 --rc genhtml_function_coverage=1 00:06:13.086 --rc genhtml_legend=1 00:06:13.086 --rc geninfo_all_blocks=1 00:06:13.086 --rc geninfo_unexecuted_blocks=1 00:06:13.086 00:06:13.086 ' 00:06:13.086 15:23:18 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:13.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.086 --rc genhtml_branch_coverage=1 00:06:13.086 --rc genhtml_function_coverage=1 00:06:13.086 --rc genhtml_legend=1 00:06:13.086 --rc geninfo_all_blocks=1 00:06:13.086 --rc geninfo_unexecuted_blocks=1 00:06:13.086 00:06:13.086 ' 00:06:13.086 15:23:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:13.086 15:23:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:13.086 15:23:18 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:13.086 15:23:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:13.086 15:23:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.086 15:23:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.086 ************************************ 00:06:13.086 START TEST nvmf_target_core 00:06:13.086 ************************************ 00:06:13.086 15:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:13.086 * Looking for test storage... 
00:06:13.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:13.086 15:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:13.086 15:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:13.086 15:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:13.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.086 --rc genhtml_branch_coverage=1 00:06:13.086 --rc genhtml_function_coverage=1 00:06:13.086 --rc genhtml_legend=1 00:06:13.086 --rc geninfo_all_blocks=1 00:06:13.086 --rc geninfo_unexecuted_blocks=1 00:06:13.086 00:06:13.086 ' 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:13.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.086 --rc genhtml_branch_coverage=1 
00:06:13.086 --rc genhtml_function_coverage=1 00:06:13.086 --rc genhtml_legend=1 00:06:13.086 --rc geninfo_all_blocks=1 00:06:13.086 --rc geninfo_unexecuted_blocks=1 00:06:13.086 00:06:13.086 ' 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:13.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.086 --rc genhtml_branch_coverage=1 00:06:13.086 --rc genhtml_function_coverage=1 00:06:13.086 --rc genhtml_legend=1 00:06:13.086 --rc geninfo_all_blocks=1 00:06:13.086 --rc geninfo_unexecuted_blocks=1 00:06:13.086 00:06:13.086 ' 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:13.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.086 --rc genhtml_branch_coverage=1 00:06:13.086 --rc genhtml_function_coverage=1 00:06:13.086 --rc genhtml_legend=1 00:06:13.086 --rc geninfo_all_blocks=1 00:06:13.086 --rc geninfo_unexecuted_blocks=1 00:06:13.086 00:06:13.086 ' 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.086 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:13.346 15:23:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:13.347 ************************************ 00:06:13.347 START TEST nvmf_abort 00:06:13.347 ************************************ 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:13.347 * Looking for test storage... 
00:06:13.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.347 
15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:13.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.347 --rc genhtml_branch_coverage=1 00:06:13.347 --rc genhtml_function_coverage=1 00:06:13.347 --rc genhtml_legend=1 00:06:13.347 --rc geninfo_all_blocks=1 00:06:13.347 --rc 
geninfo_unexecuted_blocks=1 00:06:13.347 00:06:13.347 ' 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:13.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.347 --rc genhtml_branch_coverage=1 00:06:13.347 --rc genhtml_function_coverage=1 00:06:13.347 --rc genhtml_legend=1 00:06:13.347 --rc geninfo_all_blocks=1 00:06:13.347 --rc geninfo_unexecuted_blocks=1 00:06:13.347 00:06:13.347 ' 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:13.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.347 --rc genhtml_branch_coverage=1 00:06:13.347 --rc genhtml_function_coverage=1 00:06:13.347 --rc genhtml_legend=1 00:06:13.347 --rc geninfo_all_blocks=1 00:06:13.347 --rc geninfo_unexecuted_blocks=1 00:06:13.347 00:06:13.347 ' 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:13.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.347 --rc genhtml_branch_coverage=1 00:06:13.347 --rc genhtml_function_coverage=1 00:06:13.347 --rc genhtml_legend=1 00:06:13.347 --rc geninfo_all_blocks=1 00:06:13.347 --rc geninfo_unexecuted_blocks=1 00:06:13.347 00:06:13.347 ' 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
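The trace above shows scripts/common.sh splitting the two dotted versions on `IFS=.-:` into arrays and comparing them component by component to decide that lcov 1.15 predates 2 (hence the pre-2.0 `--rc lcov_*_coverage` option spelling). A minimal standalone sketch of that comparison; `ver_lt` is a hypothetical helper name, not the SPDK function itself:

```shell
# Component-wise dotted-version "less than", mirroring the cmp_versions
# trace above: split on . - : and compare numerically, padding with 0.
ver_lt() {
  local IFS=.-:
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v x y
  for (( v = 0; v < n; v++ )); do
    x=${a[v]:-0} y=${b[v]:-0}
    if (( x < y )); then return 0; fi
    if (( x > y )); then return 1; fi
  done
  return 1  # versions are equal: not less-than
}

ver_lt 1.15 2 && echo "older" || echo "newer-or-equal"
```

Run against the values in the trace (1.15 vs 2) this prints `older`, matching the `lt 1.15 2` result that selects the legacy lcov option set.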
00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.347 15:23:19 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.347 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:13.348 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:13.348 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:13.348 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:13.348 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:13.348 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:13.348 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:13.348 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:13.348 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.348 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:13.348 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.607 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:13.607 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:13.607 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:13.607 15:23:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:20.172 15:23:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:20.172 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:20.172 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:20.172 15:23:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:20.172 Found net devices under 0000:86:00.0: cvl_0_0 00:06:20.172 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:06:20.173 Found net devices under 0000:86:00.1: cvl_0_1 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:20.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:20.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:06:20.173 00:06:20.173 --- 10.0.0.2 ping statistics --- 00:06:20.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.173 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:20.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:20.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:06:20.173 00:06:20.173 --- 10.0.0.1 ping statistics --- 00:06:20.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.173 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2832227 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2832227 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2832227 ']' 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.173 15:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.173 [2024-12-06 15:23:25.418121] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:06:20.173 [2024-12-06 15:23:25.418170] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.173 [2024-12-06 15:23:25.498080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.173 [2024-12-06 15:23:25.541302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:20.173 [2024-12-06 15:23:25.541340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:20.173 [2024-12-06 15:23:25.541347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.173 [2024-12-06 15:23:25.541353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.173 [2024-12-06 15:23:25.541358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
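Earlier in the trace, nvmf/common.sh line 33 logs `[: : integer expression expected` because an empty string reached `test`'s numeric `-eq` operator (`'[' '' -eq 1 ']'`). A minimal reproduction of that failure mode and the usual default-value guard; the variable name `flag` is illustrative only:

```shell
# An empty string is not a valid operand for -eq; test prints the
# "integer expression expected" diagnostic and returns status 2.
flag=""
[ "$flag" -eq 1 ] 2>/dev/null && echo on || echo off      # noisy without the redirect
[ "${flag:-0}" -eq 1 ]         && echo on || echo off     # default-to-0 keeps the test numeric
```

Both branches print `off`; the `${flag:-0}` form avoids the diagnostic entirely because `:-` substitutes the default when the variable is unset or empty.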
00:06:20.173 [2024-12-06 15:23:25.542744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.173 [2024-12-06 15:23:25.542851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.173 [2024-12-06 15:23:25.542852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.432 [2024-12-06 15:23:26.298332] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.432 Malloc0 00:06:20.432 15:23:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.432 Delay0 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.432 [2024-12-06 15:23:26.378071] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.432 15:23:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:20.690 [2024-12-06 15:23:26.556443] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:23.223 Initializing NVMe Controllers 00:06:23.223 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:23.223 controller IO queue size 128 less than required 00:06:23.223 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:23.223 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:23.223 Initialization complete. Launching workers. 
00:06:23.223 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37356 00:06:23.223 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37417, failed to submit 62 00:06:23.223 success 37360, unsuccessful 57, failed 0 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:23.223 rmmod nvme_tcp 00:06:23.223 rmmod nvme_fabrics 00:06:23.223 rmmod nvme_keyring 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:23.223 15:23:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2832227 ']' 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2832227 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2832227 ']' 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2832227 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832227 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832227' 00:06:23.223 killing process with pid 2832227 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2832227 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2832227 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:06:23.223 15:23:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:23.223 15:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:23.223 15:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:23.223 15:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.223 15:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.223 15:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.129 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:25.129 00:06:25.129 real 0m11.934s 00:06:25.129 user 0m14.022s 00:06:25.129 sys 0m5.428s 00:06:25.129 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.129 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.129 ************************************ 00:06:25.129 END TEST nvmf_abort 00:06:25.129 ************************************ 00:06:25.129 15:23:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:25.129 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:25.129 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.129 15:23:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:25.390 ************************************ 00:06:25.390 START TEST nvmf_ns_hotplug_stress 00:06:25.390 ************************************ 00:06:25.390 15:23:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:25.390 * Looking for test storage... 00:06:25.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.390 
15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.390 15:23:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.390 --rc genhtml_branch_coverage=1 00:06:25.390 --rc genhtml_function_coverage=1 00:06:25.390 --rc genhtml_legend=1 00:06:25.390 --rc geninfo_all_blocks=1 00:06:25.390 --rc geninfo_unexecuted_blocks=1 00:06:25.390 00:06:25.390 ' 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.390 --rc genhtml_branch_coverage=1 00:06:25.390 --rc genhtml_function_coverage=1 00:06:25.390 --rc genhtml_legend=1 00:06:25.390 --rc geninfo_all_blocks=1 00:06:25.390 --rc geninfo_unexecuted_blocks=1 00:06:25.390 00:06:25.390 ' 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.390 --rc genhtml_branch_coverage=1 00:06:25.390 --rc genhtml_function_coverage=1 00:06:25.390 --rc genhtml_legend=1 00:06:25.390 --rc geninfo_all_blocks=1 00:06:25.390 --rc geninfo_unexecuted_blocks=1 00:06:25.390 00:06:25.390 ' 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.390 --rc genhtml_branch_coverage=1 00:06:25.390 --rc genhtml_function_coverage=1 00:06:25.390 --rc genhtml_legend=1 00:06:25.390 --rc geninfo_all_blocks=1 00:06:25.390 --rc geninfo_unexecuted_blocks=1 00:06:25.390 
00:06:25.390 ' 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.390 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:25.391 15:23:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:31.958 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:31.959 15:23:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:31.959 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:31.959 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:31.959 15:23:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:31.959 Found net devices under 0000:86:00.0: cvl_0_0 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:31.959 15:23:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:31.959 Found net devices under 0000:86:00.1: cvl_0_1 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:31.959 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:31.960 15:23:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:31.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:31.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:06:31.960 00:06:31.960 --- 10.0.0.2 ping statistics --- 00:06:31.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.960 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:31.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:31.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:06:31.960 00:06:31.960 --- 10.0.0.1 ping statistics --- 00:06:31.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.960 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2836462 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2836462 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2836462 ']' 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
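The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above is the waitforlisten pattern: poll until the target's RPC socket exists. A minimal dry-run sketch of that pattern, using a temp file as a stand-in for the real /var/tmp/spdk.sock (no actual nvmf_tgt is assumed):

```shell
# Dry-run sketch of the waitforlisten polling loop.
# Assumption: a temp file stands in for the /var/tmp/spdk.sock UNIX socket.
sock="$(mktemp -u)"
( sleep 0.2; touch "$sock" ) &      # stand-in for nvmf_tgt creating its socket
listening=no
for i in $(seq 1 50); do            # bounded retries, like max_retries=100 above
    if [ -e "$sock" ]; then listening=yes; break; fi
    sleep 0.1
done
echo "listening=$listening"
rm -f "$sock"
```

The real helper additionally probes the socket with an RPC call rather than just checking for its existence.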
00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:31.960 [2024-12-06 15:23:37.400348] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:06:31.960 [2024-12-06 15:23:37.400406] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.960 [2024-12-06 15:23:37.478194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.960 [2024-12-06 15:23:37.519539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:31.960 [2024-12-06 15:23:37.519572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:31.960 [2024-12-06 15:23:37.519579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.960 [2024-12-06 15:23:37.519585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.960 [2024-12-06 15:23:37.519590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
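The network plumbing performed by nvmf_tcp_init earlier in this log reduces to a short sequence of ip(8) and iptables commands. A dry-run sketch of that sequence (the run() wrapper only echoes each command; a real run needs root and the cvl_0_* devices from this machine):

```shell
# Dry-run: echo each command instead of executing it (real run needs root).
run() { echo "+ $*"; }
run ip netns add cvl_0_0_ns_spdk                                  # target namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target NIC in
run ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator IP
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
run ping -c 1 10.0.0.2                                            # connectivity check
```

Putting the target NIC in its own namespace lets a single host act as both initiator (10.0.0.1) and target (10.0.0.2) over real hardware, which is why the nvmf_tgt app is later launched under `ip netns exec cvl_0_0_ns_spdk`.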
00:06:31.960 [2024-12-06 15:23:37.520928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.960 [2024-12-06 15:23:37.521034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.960 [2024-12-06 15:23:37.521035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:31.960 [2024-12-06 15:23:37.823009] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.960 15:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:32.218 15:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:32.476 [2024-12-06 15:23:38.248487] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.476 15:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:32.476 15:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:32.735 Malloc0 00:06:32.735 15:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:32.993 Delay0 00:06:32.993 15:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.251 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:33.251 NULL1 00:06:33.251 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:33.510 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2836849 00:06:33.510 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:33.510 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:33.510 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.768 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.026 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:34.026 15:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:34.284 true 00:06:34.284 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:34.284 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.543 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.543 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:34.543 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:34.801 true 00:06:34.801 15:23:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:34.801 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.060 15:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.318 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:35.318 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:35.576 true 00:06:35.576 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:35.576 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.576 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.834 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:35.834 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:36.091 true 00:06:36.091 15:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:36.091 15:23:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.349 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.607 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:36.607 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:36.865 true 00:06:36.865 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:36.865 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.865 15:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.124 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:37.124 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:37.382 true 00:06:37.382 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:37.382 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.641 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.899 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:37.899 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:37.899 true 00:06:38.157 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:38.157 15:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.158 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.416 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:38.416 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:38.674 true 00:06:38.674 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:38.674 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.932 
15:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.190 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:39.190 15:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:39.190 true 00:06:39.190 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:39.190 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.447 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.704 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:39.704 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:39.962 true 00:06:39.962 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:39.962 15:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.220 15:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.478 15:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:40.478 15:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:40.478 true 00:06:40.478 15:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:40.478 15:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.736 15:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.993 15:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:40.993 15:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:41.251 true 00:06:41.251 15:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:41.251 15:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.507 15:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.507 
15:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:41.507 15:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:41.765 true 00:06:41.765 15:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:41.765 15:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.023 15:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.279 15:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:42.279 15:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:42.536 true 00:06:42.536 15:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:42.536 15:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.794 15:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.794 15:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:42.794 15:23:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:43.051 true 00:06:43.051 15:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:43.051 15:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.308 15:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.566 15:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:43.566 15:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:43.822 true 00:06:43.822 15:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:43.822 15:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.823 15:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.080 15:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:44.080 15:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:44.337 true 00:06:44.337 15:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:44.337 15:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.594 15:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.852 15:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:44.852 15:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:45.109 true 00:06:45.109 15:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:45.109 15:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.109 15:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.367 15:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:45.367 15:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:45.625 true 00:06:45.625 15:23:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:45.625 15:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.882 15:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.140 15:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:46.140 15:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:46.398 true 00:06:46.398 15:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:46.398 15:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.398 15:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.657 15:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:46.657 15:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:46.915 true 00:06:46.915 15:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:46.915 15:23:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.173 15:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.431 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:47.431 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:47.431 true 00:06:47.431 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:47.431 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.690 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.948 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:47.948 15:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:48.207 true 00:06:48.207 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:48.207 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.466 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.466 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:48.466 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:48.723 true 00:06:48.723 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:48.723 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.980 15:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.236 15:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:49.236 15:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:49.493 true 00:06:49.493 15:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:49.493 15:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.752 
15:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.752 15:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:49.752 15:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:50.010 true 00:06:50.010 15:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:50.010 15:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.268 15:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.526 15:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:50.526 15:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:50.785 true 00:06:50.785 15:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:50.785 15:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.785 15:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.043 15:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:51.043 15:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:51.301 true 00:06:51.301 15:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:51.301 15:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.558 15:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.815 15:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:51.815 15:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:52.073 true 00:06:52.073 15:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:52.073 15:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.073 15:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.331 
15:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:52.331 15:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:52.589 true 00:06:52.589 15:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:52.589 15:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.848 15:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.106 15:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:53.106 15:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:53.106 true 00:06:53.106 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:53.106 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.365 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.622 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:53.622 15:23:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:53.881 true 00:06:53.881 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:53.881 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.139 15:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.409 15:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:54.409 15:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:54.409 true 00:06:54.409 15:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:54.409 15:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.668 15:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.926 15:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:54.926 15:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:55.185 true 00:06:55.185 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:55.185 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.442 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.700 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:55.700 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:55.700 true 00:06:55.700 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:55.700 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.958 15:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.215 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:56.215 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:56.526 true 00:06:56.526 15:24:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:56.526 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.526 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.786 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:56.786 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:57.044 true 00:06:57.044 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:57.044 15:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.351 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.660 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:57.660 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:57.660 true 00:06:57.660 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:57.660 15:24:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.001 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.001 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:58.001 15:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:58.258 true 00:06:58.258 15:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:58.258 15:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.517 15:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.776 15:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:58.776 15:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:59.035 true 00:06:59.035 15:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:59.035 15:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.297 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.297 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:59.297 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:59.555 true 00:06:59.555 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:06:59.555 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.813 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.071 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:00.071 15:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:00.329 true 00:07:00.329 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:07:00.329 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.329 
15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.587 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:00.587 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:00.847 true 00:07:00.847 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:07:00.847 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.106 15:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.365 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:01.365 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:01.365 true 00:07:01.624 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:07:01.624 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.624 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.883 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:01.883 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:02.142 true 00:07:02.142 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:07:02.142 15:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.401 15:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.661 15:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:02.661 15:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:02.661 true 00:07:02.920 15:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:07:02.920 15:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.920 15:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.179 
15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:03.179 15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:03.438 true 00:07:03.438 15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849 00:07:03.438 15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.697 15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.956 Initializing NVMe Controllers 00:07:03.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:03.956 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:07:03.956 Controller IO queue size 128, less than required. 00:07:03.956 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:03.956 WARNING: Some requested NVMe devices were skipped 00:07:03.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:03.956 Initialization complete. Launching workers. 
00:07:03.956 ========================================================
00:07:03.956 Latency(us)
00:07:03.956 Device Information : IOPS MiB/s Average min max
00:07:03.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27307.07 13.33 4687.33 2159.01 43219.40
00:07:03.956 ========================================================
00:07:03.956 Total : 27307.07 13.33 4687.33 2159.01 43219.40
00:07:03.956
00:07:03.956 15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:07:03.956 15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:07:03.956 true
00:07:04.215 15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2836849
00:07:04.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2836849) - No such process
00:07:04.215 15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2836849
00:07:04.215 15:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:04.215 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:04.474 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:04.474 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:04.474 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:04.474
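The trace up to this point is the tail of the single-namespace hot-plug loop in test/nvmf/target/ns_hotplug_stress.sh: each pass removes namespace 1, re-adds the Delay0 bdev, grows the NULL1 bdev by one block, and uses `kill -0` to check whether the background I/O process (PID 2836849 in this log) is still alive; when `kill -0` fails, the loop exits, the script waits on the PID, and both namespaces are removed. A minimal runnable sketch of that loop, with a stub `rpc` function standing in for spdk/scripts/rpc.py, the current shell's PID standing in for the I/O process, and an added size cap so the sketch terminates on its own:

```shell
#!/bin/sh
# Stub standing in for spdk/scripts/rpc.py so the sketch runs anywhere.
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1021
io_pid=$$   # in the real test this is the background I/O process's PID

# One pass per iteration, mirroring sh@44-50 in the trace; the size cap
# replaces "run until the I/O process exits" so the sketch is finite.
while kill -0 "$io_pid" 2>/dev/null && [ "$null_size" -lt 1024 ]; do
    rpc nvmf_subsystem_remove_ns "$NQN" 1
    rpc nvmf_subsystem_add_ns "$NQN" Delay0
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"
done
```

Each `true` in the trace is the successful `kill -0`; the loop's exit condition is the `No such process` failure that follows the last resize.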
15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.475 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:04.733 null0 00:07:04.733 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.733 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.733 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:04.733 null1 00:07:04.991 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.992 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.992 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:04.992 null2 00:07:04.992 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.992 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.992 15:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:05.250 null3 00:07:05.250 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:05.250 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( 
i < nthreads )) 00:07:05.250 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:05.508 null4 00:07:05.508 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:05.508 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:05.509 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:05.767 null5 00:07:05.767 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:05.767 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:05.767 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:05.767 null6 00:07:05.768 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:05.768 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:05.768 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:06.027 null7 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:06.027 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2842909 2842911 2842912 2842914 2842916 2842918 2842920 2842922 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.028 15:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.287 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.287 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.287 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.287 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.287 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.287 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.287 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.287 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.547 15:24:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:06.547 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.807 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.066 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.066 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.066 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.066 15:24:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.066 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.066 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.066 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.066 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.066 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.066 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.066 15:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.326 15:24:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.326 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.586 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.586 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.586 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.586 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.586 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.586 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.586 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.586 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.845 15:24:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.845 15:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.104 
15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.104 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.363 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.363 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.363 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.363 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.363 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.363 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.363 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.363 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.621 15:24:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.621 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.880 15:24:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.880 15:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.138 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.138 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.138 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.138 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.138 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.138 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.138 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.138 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.397 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.397 15:24:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.655 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.655 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.655 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.655 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.655 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.655 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.655 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.655 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.914 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.914 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.914 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.914 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.914 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.914 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:09.914 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.914 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.914 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.914 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.914 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.914 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:09.914 15:24:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.914 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.914 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.915 15:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.173 15:24:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:10.173 15:24:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:10.173 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:10.174 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:10.174 rmmod nvme_tcp 00:07:10.174 rmmod nvme_fabrics 00:07:10.174 rmmod nvme_keyring 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2836462 ']' 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2836462 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2836462 ']' 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2836462 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2836462 00:07:10.432 15:24:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2836462' 00:07:10.432 killing process with pid 2836462 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2836462 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2836462 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:07:10.432 15:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:12.968 00:07:12.968 real 0m47.340s 00:07:12.968 user 3m20.966s 00:07:12.968 sys 0m16.912s 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:12.968 ************************************ 00:07:12.968 END TEST nvmf_ns_hotplug_stress 00:07:12.968 ************************************ 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:12.968 ************************************ 00:07:12.968 START TEST nvmf_delete_subsystem 00:07:12.968 ************************************ 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:12.968 * Looking for test storage... 
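The long add/remove trace that the END TEST banner above closes comes from a small counter loop in `target/ns_hotplug_stress.sh` (the `@16`–`@18` markers in the trace). A minimal reconstruction of one batch is sketched below; the loop framing (`i < 10` guard at line 16) and the interleaved ordering in the real trace, which suggests the RPCs are backgrounded, are left out for clarity. The `rpc` stub standing in for `scripts/rpc.py` and the exact loop structure are assumptions, not the script's actual code.

```shell
#!/usr/bin/env bash
# Hedged reconstruction of one add/remove batch from the
# nvmf_ns_hotplug_stress trace above. The "rpc" function is a local
# stub for scripts/rpc.py so the sketch runs standalone.
NQN=nqn.2016-06.io.spdk:cnode1              # subsystem NQN from the trace
calls=0
rpc() { echo "rpc.py $*"; (( ++calls )); }  # stub: print and count each RPC

# line @17: hot-add namespaces 1..8; nsid N is backed by null bdev "null$((N-1))"
for n in {1..8}; do
    rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
done
# line @18: hot-remove the same eight namespaces before the next batch
for n in {1..8}; do
    rpc nvmf_subsystem_remove_ns "$NQN" "$n"
done
```

In the real run this batch repeats under the `(( i < 10 ))` guard while a host-side workload holds the subsystem open, which is what makes it a hotplug stress test rather than a plain configuration check.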
00:07:12.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:12.968 15:24:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:12.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.968 --rc genhtml_branch_coverage=1 00:07:12.968 --rc genhtml_function_coverage=1 00:07:12.968 --rc genhtml_legend=1 00:07:12.968 --rc geninfo_all_blocks=1 00:07:12.968 --rc geninfo_unexecuted_blocks=1 00:07:12.968 00:07:12.968 ' 00:07:12.968 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:12.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.969 --rc genhtml_branch_coverage=1 00:07:12.969 --rc genhtml_function_coverage=1 00:07:12.969 --rc genhtml_legend=1 00:07:12.969 --rc geninfo_all_blocks=1 00:07:12.969 --rc geninfo_unexecuted_blocks=1 00:07:12.969 00:07:12.969 ' 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:12.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.969 --rc genhtml_branch_coverage=1 00:07:12.969 --rc genhtml_function_coverage=1 00:07:12.969 --rc genhtml_legend=1 00:07:12.969 --rc geninfo_all_blocks=1 00:07:12.969 --rc geninfo_unexecuted_blocks=1 00:07:12.969 00:07:12.969 ' 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:12.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.969 --rc genhtml_branch_coverage=1 00:07:12.969 --rc genhtml_function_coverage=1 00:07:12.969 --rc genhtml_legend=1 00:07:12.969 --rc geninfo_all_blocks=1 00:07:12.969 --rc geninfo_unexecuted_blocks=1 00:07:12.969 00:07:12.969 ' 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.969 15:24:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:12.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:12.969 15:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:19.545 15:24:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:19.545 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:19.545 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:19.545 Found net devices under 0000:86:00.0: cvl_0_0 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:07:19.545 Found net devices under 0000:86:00.1: cvl_0_1 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.545 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:19.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:19.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:07:19.546 00:07:19.546 --- 10.0.0.2 ping statistics --- 00:07:19.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.546 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:19.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:07:19.546 00:07:19.546 --- 10.0.0.1 ping statistics --- 00:07:19.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.546 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:19.546 15:24:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2847440 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2847440 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2847440 ']' 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.546 15:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.546 [2024-12-06 15:24:24.806477] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:07:19.546 [2024-12-06 15:24:24.806529] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.546 [2024-12-06 15:24:24.885078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.546 [2024-12-06 15:24:24.924800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.546 [2024-12-06 15:24:24.924853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.546 [2024-12-06 15:24:24.924862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.546 [2024-12-06 15:24:24.924868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.546 [2024-12-06 15:24:24.924874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:19.546 [2024-12-06 15:24:24.926108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.546 [2024-12-06 15:24:24.926109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.546 [2024-12-06 15:24:25.071252] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.546 [2024-12-06 15:24:25.091488] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.546 NULL1 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.546 Delay0 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.546 15:24:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2847562 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:19.546 15:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:19.546 [2024-12-06 15:24:25.203310] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
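The trace above launches spdk_nvme_perf in the background, records it as perf_pid, and later polls it with `kill -0` under a bounded delay counter (the `(( delay++ > 30 ))` / `sleep 0.5` lines further down). A standalone sketch of that polling pattern, with a plain `sleep` standing in for the perf process (names are illustrative; the real logic lives in test/nvmf/target/delete_subsystem.sh):

```shell
# Sketch of the background-workload poll used by delete_subsystem.sh:
# start a worker, record its PID, then poll with `kill -0` under a
# bounded retry counter instead of blocking on `wait`.
sleep 1 &            # stand-in for the spdk_nvme_perf invocation
perf_pid=$!
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 30 )) && { echo "timed out waiting for $perf_pid"; exit 1; }
    sleep 0.5
done
echo "worker $perf_pid exited after $delay polls"
```

`kill -0` sends no signal; it only checks that the PID still exists, which is why the script can also use it after teardown to prove the perf process is gone.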
00:07:21.453 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.453 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.453 15:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 
00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with 
error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read 
completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 
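The completions above all report sct=0 (generic command status) with sc=8; in the NVMe base specification, generic status 0x08 is "Command Aborted due to SQ Deletion", which is the expected outcome when the subsystem is deleted while the perf workload still has I/O in flight. A small illustrative lookup (status names taken from the NVMe spec; this helper is not part of the SPDK tree):

```shell
# Hypothetical decoder for the (sct, sc) pairs in this log. Only the
# generic-status (sct=0) values relevant here are listed; names follow
# the NVMe base specification.
decode_generic_sc() {
    case "$1" in
        0) echo "Successful Completion" ;;
        7) echo "Command Abort Requested" ;;
        8) echo "Command Aborted due to SQ Deletion" ;;
        *) echo "generic status $1" ;;
    esac
}
decode_generic_sc 8
```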
00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, 
sc=8) 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 starting I/O failed: -6 00:07:21.453 Write completed with error (sct=0, sc=8) 00:07:21.453 Read completed with error (sct=0, sc=8) 00:07:21.453 [2024-12-06 15:24:27.332190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7610000c40 is same with the state(6) to be set 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Read 
completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Read completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:21.454 Write completed with error (sct=0, sc=8) 00:07:22.394 [2024-12-06 15:24:28.300155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5a9b0 is same with the state(6) to be set 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with 
error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 [2024-12-06 15:24:28.333855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b592c0 is same with the state(6) to be set 00:07:22.394 Read completed with error (sct=0, sc=8) 
00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 [2024-12-06 15:24:28.334607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f761000d7e0 is same with the state(6) to be set 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 
Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 [2024-12-06 15:24:28.334751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f761000d020 is same with the state(6) to be set 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed 
with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Write completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 Read completed with error (sct=0, sc=8) 00:07:22.394 [2024-12-06 15:24:28.335741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b59680 is same with the state(6) to be set 00:07:22.394 Initializing NVMe Controllers 
00:07:22.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:22.394 Controller IO queue size 128, less than required. 00:07:22.394 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:22.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:22.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:22.394 Initialization complete. Launching workers. 00:07:22.394 ======================================================== 00:07:22.394 Latency(us) 00:07:22.394 Device Information : IOPS MiB/s Average min max 00:07:22.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.84 0.09 900737.07 395.52 1009777.74 00:07:22.394 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.97 0.08 950763.24 284.20 2001799.74 00:07:22.394 ======================================================== 00:07:22.394 Total : 354.81 0.17 924278.80 284.20 2001799.74 00:07:22.394 00:07:22.394 [2024-12-06 15:24:28.336068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5a9b0 (9): Bad file descriptor 00:07:22.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:22.394 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.394 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:22.394 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2847562 00:07:22.394 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:22.961 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:22.961 15:24:28 
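The Total row in the latency summary above can be cross-checked against the per-core rows: the overall average latency is the IOPS-weighted mean of the per-core averages. A quick awk check with the core 2 and core 3 figures copied from the table (the printed values are rounded, so expect agreement with the reported 924278.80 us only to within about 1 us):

```shell
# Cross-check of the perf summary: Total average latency should equal the
# IOPS-weighted mean of the per-core averages (IOPS and average latency in
# microseconds copied from the table above).
wavg=$(awk 'BEGIN {
    iops2 = 187.84; avg2 = 900737.07   # core 2 row
    iops3 = 166.97; avg3 = 950763.24   # core 3 row
    printf "%.2f", (iops2*avg2 + iops3*avg3) / (iops2 + iops3)
}')
echo "IOPS-weighted average latency: $wavg us"
```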
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2847562 00:07:22.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2847562) - No such process 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2847562 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2847562 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2847562 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:22.962 15:24:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.962 [2024-12-06 15:24:28.861095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2848150 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848150 00:07:22.962 15:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:22.962 [2024-12-06 15:24:28.957259] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:23.528 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:23.528 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848150 00:07:23.529 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:24.094 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:24.094 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848150 00:07:24.094 15:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:24.659 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:24.659 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848150 00:07:24.659 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:24.916 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:24.916 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 
2848150 00:07:24.916 15:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:25.479 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:25.479 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848150 00:07:25.479 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:26.043 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:26.043 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848150 00:07:26.043 15:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:26.301 Initializing NVMe Controllers 00:07:26.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:26.301 Controller IO queue size 128, less than required. 00:07:26.301 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:26.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:26.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:26.301 Initialization complete. Launching workers. 
00:07:26.301 ======================================================== 00:07:26.301 Latency(us) 00:07:26.301 Device Information : IOPS MiB/s Average min max 00:07:26.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002625.40 1000151.16 1042804.88 00:07:26.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003975.61 1000117.29 1011298.57 00:07:26.301 ======================================================== 00:07:26.301 Total : 256.00 0.12 1003300.51 1000117.29 1042804.88 00:07:26.301 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2848150 00:07:26.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2848150) - No such process 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2848150 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:07:26.559 rmmod nvme_tcp 00:07:26.559 rmmod nvme_fabrics 00:07:26.559 rmmod nvme_keyring 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2847440 ']' 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2847440 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2847440 ']' 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2847440 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2847440 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2847440' 00:07:26.559 killing process with pid 2847440 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2847440 00:07:26.559 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
2847440 00:07:26.818 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:26.818 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:26.818 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:26.818 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:26.818 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:26.818 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:26.818 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:26.818 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:26.818 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:26.818 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.818 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.818 15:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:29.354 00:07:29.354 real 0m16.197s 00:07:29.354 user 0m29.267s 00:07:29.354 sys 0m5.520s 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.354 ************************************ 00:07:29.354 END TEST 
nvmf_delete_subsystem 00:07:29.354 ************************************ 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:29.354 ************************************ 00:07:29.354 START TEST nvmf_host_management 00:07:29.354 ************************************ 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:29.354 * Looking for test storage... 00:07:29.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.354 15:24:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:29.354 15:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:29.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.354 --rc genhtml_branch_coverage=1 00:07:29.354 --rc genhtml_function_coverage=1 00:07:29.354 --rc genhtml_legend=1 00:07:29.354 --rc 
geninfo_all_blocks=1 00:07:29.354 --rc geninfo_unexecuted_blocks=1 00:07:29.354 00:07:29.354 ' 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:29.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.354 --rc genhtml_branch_coverage=1 00:07:29.354 --rc genhtml_function_coverage=1 00:07:29.354 --rc genhtml_legend=1 00:07:29.354 --rc geninfo_all_blocks=1 00:07:29.354 --rc geninfo_unexecuted_blocks=1 00:07:29.354 00:07:29.354 ' 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:29.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.354 --rc genhtml_branch_coverage=1 00:07:29.354 --rc genhtml_function_coverage=1 00:07:29.354 --rc genhtml_legend=1 00:07:29.354 --rc geninfo_all_blocks=1 00:07:29.354 --rc geninfo_unexecuted_blocks=1 00:07:29.354 00:07:29.354 ' 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:29.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.354 --rc genhtml_branch_coverage=1 00:07:29.354 --rc genhtml_function_coverage=1 00:07:29.354 --rc genhtml_legend=1 00:07:29.354 --rc geninfo_all_blocks=1 00:07:29.354 --rc geninfo_unexecuted_blocks=1 00:07:29.354 00:07:29.354 ' 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:29.354 
15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.354 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:29.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:29.355 15:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:35.951 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:35.951 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.951 15:24:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:35.951 Found net devices under 0000:86:00.0: cvl_0_0 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:35.951 Found net devices under 0000:86:00.1: cvl_0_1 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:35.951 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.952 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.952 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.952 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:07:35.952 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:35.952 15:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:35.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:07:35.952 00:07:35.952 --- 10.0.0.2 ping statistics --- 00:07:35.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.952 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:07:35.952 00:07:35.952 --- 10.0.0.1 ping statistics --- 00:07:35.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.952 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.952 15:24:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2852267 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2852267 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2852267 ']' 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.952 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.952 [2024-12-06 15:24:41.140967] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:07:35.952 [2024-12-06 15:24:41.141009] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.952 [2024-12-06 15:24:41.219577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.952 [2024-12-06 15:24:41.262728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.952 [2024-12-06 15:24:41.262761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.952 [2024-12-06 15:24:41.262769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.952 [2024-12-06 15:24:41.262775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.952 [2024-12-06 15:24:41.262780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:35.952 [2024-12-06 15:24:41.264316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.952 [2024-12-06 15:24:41.264435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.952 [2024-12-06 15:24:41.264476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.952 [2024-12-06 15:24:41.264477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:36.212 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.212 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:36.212 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:36.212 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.212 15:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.212 [2024-12-06 15:24:42.012315] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:36.212 15:24:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.212 Malloc0 00:07:36.212 [2024-12-06 15:24:42.094661] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2852535 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2852535 /var/tmp/bdevperf.sock 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2852535 ']' 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:36.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:36.212 { 00:07:36.212 "params": { 00:07:36.212 "name": "Nvme$subsystem", 00:07:36.212 "trtype": "$TEST_TRANSPORT", 00:07:36.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:36.212 "adrfam": "ipv4", 00:07:36.212 "trsvcid": "$NVMF_PORT", 00:07:36.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:36.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:36.212 "hdgst": ${hdgst:-false}, 
00:07:36.212 "ddgst": ${ddgst:-false} 00:07:36.212 }, 00:07:36.212 "method": "bdev_nvme_attach_controller" 00:07:36.212 } 00:07:36.212 EOF 00:07:36.212 )") 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:36.212 15:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:36.212 "params": { 00:07:36.212 "name": "Nvme0", 00:07:36.212 "trtype": "tcp", 00:07:36.212 "traddr": "10.0.0.2", 00:07:36.212 "adrfam": "ipv4", 00:07:36.212 "trsvcid": "4420", 00:07:36.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:36.212 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:36.212 "hdgst": false, 00:07:36.212 "ddgst": false 00:07:36.212 }, 00:07:36.212 "method": "bdev_nvme_attach_controller" 00:07:36.212 }' 00:07:36.212 [2024-12-06 15:24:42.191776] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:07:36.212 [2024-12-06 15:24:42.191820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852535 ] 00:07:36.471 [2024-12-06 15:24:42.265593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.471 [2024-12-06 15:24:42.306474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.730 Running I/O for 10 seconds... 
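The `gen_nvmf_target_json 0` call above expands a heredoc template into the JSON that bdevperf reads via `--json /dev/fd/63`, producing one `bdev_nvme_attach_controller` entry per subsystem index. As an illustrative sketch only (the real helper is the shell function in `nvmf/common.sh`, not this code), the same structure can be reproduced in Python; the field values mirror the config printed in this log:

```python
import json

def gen_nvmf_target_json(subsystems=("0",), target_ip="10.0.0.2",
                         trtype="tcp", port="4420"):
    """Sketch of the config fragment emitted in this log: one
    bdev_nvme_attach_controller call per subsystem index. Illustrative
    only -- the actual helper lives in nvmf/common.sh."""
    config = []
    for subsystem in subsystems:
        config.append({
            "params": {
                "name": f"Nvme{subsystem}",
                "trtype": trtype,
                "traddr": target_ip,
                "adrfam": "ipv4",
                "trsvcid": port,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
                # hdgst/ddgst default to false, as in the expanded heredoc
                "hdgst": False,
                "ddgst": False,
            },
            "method": "bdev_nvme_attach_controller",
        })
    return json.dumps(config)

print(gen_nvmf_target_json())
```

This matches the `printf '%s\n'` output seen just above, where `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` have been substituted with `tcp`, `10.0.0.2`, and `4420`.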
00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1102 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1102 -ge 100 ']' 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.299 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.299 [2024-12-06 15:24:43.117576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.299 [2024-12-06 15:24:43.117615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.299 [2024-12-06 15:24:43.117625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.299 [2024-12-06 15:24:43.117632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.299 [2024-12-06 15:24:43.117639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.299 [2024-12-06 15:24:43.117646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.299 [2024-12-06 15:24:43.117653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:37.299 [2024-12-06 15:24:43.117660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.299 [2024-12-06 15:24:43.117667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b5120 is same with the state(6) to be set 00:07:37.299 [2024-12-06 15:24:43.117906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.299 [2024-12-06 15:24:43.117916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.299 [2024-12-06 15:24:43.117929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.299 [2024-12-06 15:24:43.117937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.299 [2024-12-06 15:24:43.117945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.299 
[2024-12-06 15:24:43.117952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.299 [2024-12-06 15:24:43.117961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.299 [2024-12-06 15:24:43.117967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.299 [2024-12-06 15:24:43.117976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.117983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.117991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.117998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 
[2024-12-06 15:24:43.118292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.300 [2024-12-06 15:24:43.118389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.300 [2024-12-06 15:24:43.118396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 31 further identical nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: WRITE sqid:1 cid:32-62 nsid:1, lba 28672-32512 in len:128 steps, each completing ABORTED - SQ DELETION (00/08) ...]
00:07:37.301 [2024-12-06 15:24:43.118860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.301 [2024-12-06 15:24:43.118867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.301 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.301 15:24:43
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:37.302 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.302 [2024-12-06 15:24:43.119797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:37.302 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.302 task offset: 24576 on job bdev=Nvme0n1 fails 00:07:37.302 00:07:37.302 Latency(us) 00:07:37.302 [2024-12-06T14:24:43.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.302 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:37.302 Job: Nvme0n1 ended in about 0.61 seconds with error 00:07:37.302 Verification LBA range: start 0x0 length 0x400 00:07:37.302 Nvme0n1 : 0.61 2001.16 125.07 105.32 0.00 29763.87 1599.39 26963.38 00:07:37.302 [2024-12-06T14:24:43.300Z] =================================================================================================================== 00:07:37.302 [2024-12-06T14:24:43.300Z] Total : 2001.16 125.07 105.32 0.00 29763.87 1599.39 26963.38 00:07:37.302 [2024-12-06 15:24:43.122151] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.302 [2024-12-06 15:24:43.122170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b5120 (9): Bad file descriptor 00:07:37.302 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.302 15:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:37.302 [2024-12-06 15:24:43.174738] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:07:38.239 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2852535 00:07:38.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2852535) - No such process 00:07:38.239 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:38.239 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:38.239 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:38.239 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:38.239 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:38.239 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:38.239 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:38.239 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:38.239 { 00:07:38.239 "params": { 00:07:38.239 "name": "Nvme$subsystem", 00:07:38.239 "trtype": "$TEST_TRANSPORT", 00:07:38.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:38.239 "adrfam": "ipv4", 00:07:38.239 "trsvcid": "$NVMF_PORT", 00:07:38.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:38.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:38.239 "hdgst": ${hdgst:-false}, 00:07:38.239 "ddgst": ${ddgst:-false} 00:07:38.239 }, 00:07:38.239 "method": "bdev_nvme_attach_controller" 00:07:38.239 } 00:07:38.239 EOF 00:07:38.239 )") 00:07:38.239 
15:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:38.239 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:38.239 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:38.239 15:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:38.239 "params": { 00:07:38.239 "name": "Nvme0", 00:07:38.239 "trtype": "tcp", 00:07:38.239 "traddr": "10.0.0.2", 00:07:38.239 "adrfam": "ipv4", 00:07:38.239 "trsvcid": "4420", 00:07:38.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:38.239 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:38.239 "hdgst": false, 00:07:38.239 "ddgst": false 00:07:38.239 }, 00:07:38.239 "method": "bdev_nvme_attach_controller" 00:07:38.239 }' 00:07:38.239 [2024-12-06 15:24:44.183591] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:07:38.239 [2024-12-06 15:24:44.183642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852860 ] 00:07:38.497 [2024-12-06 15:24:44.258636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.497 [2024-12-06 15:24:44.300672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.497 Running I/O for 1 seconds... 
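For context, the gen_nvmf_target_json output captured above is the whole mechanism by which host_management.sh configures bdevperf: the JSON is built in the shell and handed to bdevperf on an anonymous file descriptor (`--json /dev/fd/62`). The sketch below reconstructs that handoff standalone; it is an illustration, not the test's exact code, and the bdevperf path in the trailing comment is an assumption.

```shell
#!/bin/sh
# Rebuild the attach-controller config that gen_nvmf_target_json printed in
# the log above (same fields: TCP transport, 10.0.0.2:4420, cnode0/host0).
cfg='{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}'
# Sanity-check that the document is valid JSON before feeding it anywhere.
echo "$cfg" | python3 -m json.tool > /dev/null && echo "config ok"
# The test then runs (hypothetical path, same flags as in the log):
#   build/examples/bdevperf --json <(echo "$cfg") -q 64 -o 65536 -w verify -t 1
```

Process substitution (`<(...)`) is why the log shows `/dev/fd/62`: bash exposes the generated JSON as an ephemeral fd rather than a temp file, so no cleanup is needed when bdevperf exits.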
00:07:39.869 1995.00 IOPS, 124.69 MiB/s 00:07:39.869 Latency(us) 00:07:39.869 [2024-12-06T14:24:45.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.869 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:39.869 Verification LBA range: start 0x0 length 0x400 00:07:39.869 Nvme0n1 : 1.01 2038.82 127.43 0.00 0.00 30689.99 2886.70 26588.89 00:07:39.869 [2024-12-06T14:24:45.867Z] =================================================================================================================== 00:07:39.869 [2024-12-06T14:24:45.867Z] Total : 2038.82 127.43 0.00 0.00 30689.99 2886.70 26588.89 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:39.869 15:24:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:39.869 rmmod nvme_tcp 00:07:39.869 rmmod nvme_fabrics 00:07:39.869 rmmod nvme_keyring 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2852267 ']' 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2852267 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2852267 ']' 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2852267 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2852267 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2852267' 00:07:39.869 killing process with pid 2852267 00:07:39.869 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2852267 00:07:39.869 15:24:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2852267 00:07:40.128 [2024-12-06 15:24:45.932873] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:40.128 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:40.128 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:40.128 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:40.128 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:40.128 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:40.128 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:40.128 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:40.128 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:40.128 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:40.128 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.128 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.128 15:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:42.659 00:07:42.659 real 0m13.197s 00:07:42.659 user 0m23.003s 
00:07:42.659 sys 0m5.734s 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:42.659 ************************************ 00:07:42.659 END TEST nvmf_host_management 00:07:42.659 ************************************ 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.659 ************************************ 00:07:42.659 START TEST nvmf_lvol 00:07:42.659 ************************************ 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:42.659 * Looking for test storage... 
00:07:42.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.659 15:24:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:42.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.659 --rc genhtml_branch_coverage=1 00:07:42.659 --rc genhtml_function_coverage=1 00:07:42.659 --rc genhtml_legend=1 00:07:42.659 --rc geninfo_all_blocks=1 00:07:42.659 --rc geninfo_unexecuted_blocks=1 
00:07:42.659 00:07:42.659 ' 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:42.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.659 --rc genhtml_branch_coverage=1 00:07:42.659 --rc genhtml_function_coverage=1 00:07:42.659 --rc genhtml_legend=1 00:07:42.659 --rc geninfo_all_blocks=1 00:07:42.659 --rc geninfo_unexecuted_blocks=1 00:07:42.659 00:07:42.659 ' 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:42.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.659 --rc genhtml_branch_coverage=1 00:07:42.659 --rc genhtml_function_coverage=1 00:07:42.659 --rc genhtml_legend=1 00:07:42.659 --rc geninfo_all_blocks=1 00:07:42.659 --rc geninfo_unexecuted_blocks=1 00:07:42.659 00:07:42.659 ' 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:42.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.659 --rc genhtml_branch_coverage=1 00:07:42.659 --rc genhtml_function_coverage=1 00:07:42.659 --rc genhtml_legend=1 00:07:42.659 --rc geninfo_all_blocks=1 00:07:42.659 --rc geninfo_unexecuted_blocks=1 00:07:42.659 00:07:42.659 ' 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.659 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.660 15:24:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
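The `integer expression expected` message above comes from `test`/`[` being handed an empty string where `-eq` needs a number (`'[' '' -eq 1 ']'` at `nvmf/common.sh` line 33). A minimal sketch of the failure and a defensive variant — the variable name `flag` is illustrative, not the actual variable used in `common.sh`:

```shell
#!/usr/bin/env bash
# Reproduce the "integer expression expected" failure: test's -eq requires an
# integer operand, but the variable expanded to an empty string.
flag=""
status=0
[ "$flag" -eq 1 ] 2>/dev/null || status=$?
echo "unguarded test exit status: $status"   # 2 = usage error, not merely "false"

# Defensive variant: default an empty/unset value to 0 so -eq always sees an integer.
if [ "${flag:-0}" -eq 1 ]; then echo "flag set"; else echo "flag unset"; fi
```

With the `${flag:-0}` default the comparison degrades to an ordinary false instead of a script error, which is how this warning would be silenced without changing the test's outcome.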
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:42.660 15:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:49.229 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:49.230 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:49.230 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.230 
15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:49.230 Found net devices under 0000:86:00.0: cvl_0_0 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.230 15:24:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:49.230 Found net devices under 0000:86:00.1: cvl_0_1 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:49.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:49.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:07:49.230 00:07:49.230 --- 10.0.0.2 ping statistics --- 00:07:49.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.230 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:49.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:07:49.230 00:07:49.230 --- 10.0.0.1 ping statistics --- 00:07:49.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.230 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.230 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
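The sequence above (`nvmf/common.sh` lines 265–291) builds a two-endpoint NVMe/TCP test topology on a single host by moving the target-side port into its own network namespace, so initiator (10.0.0.1) and target (10.0.0.2) traffic actually crosses the link. A dry-run sketch of that setup — interface names `cvl_0_0`/`cvl_0_1` and the addresses are taken from the log, while the `run` wrapper is an assumption added here so the sketch stays root-free:

```shell
#!/usr/bin/env bash
# Sketch of the netns setup from nvmf/common.sh. DRY_RUN=1 (the default here)
# prints each command instead of executing it, since the real ones need root.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "$*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target-side port
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```

The two ping checks at the end mirror the ones in the log: each direction must answer before the test proceeds to start `nvmf_tgt` inside the namespace.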
common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2856779 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2856779 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2856779 ']' 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.231 15:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:49.231 [2024-12-06 15:24:54.379107] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:07:49.231 [2024-12-06 15:24:54.379148] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.231 [2024-12-06 15:24:54.460091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.231 [2024-12-06 15:24:54.501231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.231 [2024-12-06 15:24:54.501266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.231 [2024-12-06 15:24:54.501272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.231 [2024-12-06 15:24:54.501279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.231 [2024-12-06 15:24:54.501283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:49.231 [2024-12-06 15:24:54.502588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.231 [2024-12-06 15:24:54.502698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.231 [2024-12-06 15:24:54.502700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.231 15:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.231 15:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:49.231 15:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:49.231 15:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:49.231 15:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:49.489 15:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.489 15:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:49.489 [2024-12-06 15:24:55.418818] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.489 15:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:49.748 15:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:49.748 15:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:50.050 15:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:50.050 15:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:50.342 15:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:50.342 15:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f0cdd180-d3d8-4b8a-9952-79f130a5bb91 00:07:50.342 15:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f0cdd180-d3d8-4b8a-9952-79f130a5bb91 lvol 20 00:07:50.601 15:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=20b57789-e381-4109-b7e9-9f66d7b7379b 00:07:50.601 15:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:50.859 15:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 20b57789-e381-4109-b7e9-9f66d7b7379b 00:07:51.119 15:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:51.119 [2024-12-06 15:24:57.095169] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.119 15:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:51.377 15:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2857285 00:07:51.377 15:24:57 
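Lines `nvmf_lvol.sh@21`–`@38` above assemble the device stack under test: two 64 MB malloc bdevs striped into a RAID-0, an lvstore on top, a 20 MB lvol, and an NVMe-oF subsystem exporting it over TCP. A condensed sketch of that `rpc.py` sequence — each call needs a live `nvmf_tgt`, so `$rpc` is defined with a leading `echo` here, and the `<lvs-uuid>`/`<lvol-uuid>` placeholders stand in for the UUIDs the real calls return (f0cdd180-… and 20b57789-… in this run):

```shell
#!/usr/bin/env bash
# Condensed rpc.py flow from target/nvmf_lvol.sh; echoed rather than executed
# because every call talks to a running nvmf_tgt on /var/tmp/spdk.sock.
rpc="echo scripts/rpc.py"        # drop the echo to run against a real target

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                     # -> Malloc0
$rpc bdev_malloc_create 64 512                     # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
$rpc bdev_lvol_create_lvstore raid0 lvs            # -> <lvs-uuid>
$rpc bdev_lvol_create -u "<lvs-uuid>" lvol 20      # 20 MB volume -> <lvol-uuid>
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "<lvol-uuid>"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

After this point the initiator side can reach the lvol as namespace 1 of `cnode0` on 10.0.0.2:4420, which is exactly the target `spdk_nvme_perf` connects to next.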
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:51.377 15:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:52.751 15:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 20b57789-e381-4109-b7e9-9f66d7b7379b MY_SNAPSHOT 00:07:52.751 15:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=593c6e7f-2c02-4905-ae64-fe6bd671eda2 00:07:52.751 15:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 20b57789-e381-4109-b7e9-9f66d7b7379b 30 00:07:53.010 15:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 593c6e7f-2c02-4905-ae64-fe6bd671eda2 MY_CLONE 00:07:53.268 15:24:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=55c6c7f5-4adc-4776-85b6-1bc13b5aef01 00:07:53.269 15:24:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 55c6c7f5-4adc-4776-85b6-1bc13b5aef01 00:07:53.835 15:24:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2857285 00:08:01.944 Initializing NVMe Controllers 00:08:01.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:01.944 Controller IO queue size 128, less than required. 00:08:01.944 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:01.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:01.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:01.944 Initialization complete. Launching workers. 00:08:01.944 ======================================================== 00:08:01.944 Latency(us) 00:08:01.944 Device Information : IOPS MiB/s Average min max 00:08:01.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12463.10 48.68 10272.69 1010.87 62330.63 00:08:01.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12383.80 48.37 10334.59 3514.10 61026.39 00:08:01.944 ======================================================== 00:08:01.944 Total : 24846.90 97.06 10303.54 1010.87 62330.63 00:08:01.944 00:08:01.944 15:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:01.944 15:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 20b57789-e381-4109-b7e9-9f66d7b7379b 00:08:02.203 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f0cdd180-d3d8-4b8a-9952-79f130a5bb91 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol 
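As a sanity check on the summary table above, the per-core IOPS rows should sum to the Total row (12463.10 + 12383.80 = 24846.90). A small awk sketch that extracts and sums the IOPS column from the table's row format, using the two data rows copied from this run:

```shell
#!/usr/bin/env bash
# Sum the per-core IOPS from an spdk_nvme_perf summary table. In the row layout
# above, IOPS is the 5th-from-last whitespace-separated field ($(NF-4)).
awk '/NSID 1 from core/ { iops += $(NF-4) }
     END { printf "total IOPS: %.2f\n", iops }' <<'EOF'
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12463.10 48.68 10272.69 1010.87 62330.63
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12383.80 48.37 10334.59 3514.10 61026.39
EOF
# prints "total IOPS: 24846.90", matching the table's Total row
```

The same one-liner works on a captured perf log, which is handy when comparing runs across patches in CI.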
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:02.463 rmmod nvme_tcp 00:08:02.463 rmmod nvme_fabrics 00:08:02.463 rmmod nvme_keyring 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2856779 ']' 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2856779 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2856779 ']' 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2856779 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2856779 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2856779' 00:08:02.463 killing process with pid 2856779 00:08:02.463 15:25:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2856779 00:08:02.463 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2856779 00:08:02.723 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:02.723 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:02.723 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:02.723 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:02.723 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:02.723 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:02.723 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:02.723 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:02.723 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:02.723 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.723 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.723 15:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:05.259 00:08:05.259 real 0m22.603s 00:08:05.259 user 1m5.054s 00:08:05.259 sys 0m7.705s 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.259 ************************************ 00:08:05.259 END TEST 
nvmf_lvol 00:08:05.259 ************************************ 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:05.259 ************************************ 00:08:05.259 START TEST nvmf_lvs_grow 00:08:05.259 ************************************ 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:05.259 * Looking for test storage... 00:08:05.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.259 15:25:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:05.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.259 --rc genhtml_branch_coverage=1 00:08:05.259 --rc genhtml_function_coverage=1 00:08:05.259 --rc genhtml_legend=1 00:08:05.259 --rc geninfo_all_blocks=1 00:08:05.259 --rc geninfo_unexecuted_blocks=1 00:08:05.259 00:08:05.259 ' 
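The `cmp_versions` trace above (from `scripts/common.sh`) splits dotted version strings on `IFS=.-:` with `read -ra` and compares them component by component, treating missing components as zero. A minimal standalone sketch of that approach follows; `lt_version` is an illustrative name, not SPDK's actual helper:

```shell
#!/usr/bin/env bash
# Component-wise "less than" for dotted version strings, as traced above.
# lt_version is an illustrative name, not SPDK's cmp_versions helper.
lt_version() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local i
    for (( i = 0; i < n; i++ )); do
        # Missing components compare as 0, so "1.15" == "1.15.0".
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

lt_version 1.15 2      && echo "1.15 < 2"
lt_version 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This mirrors the `lt 1.15 2` check the test uses to gate the lcov coverage options, but only handles purely numeric components.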
00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:05.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.259 --rc genhtml_branch_coverage=1 00:08:05.259 --rc genhtml_function_coverage=1 00:08:05.259 --rc genhtml_legend=1 00:08:05.259 --rc geninfo_all_blocks=1 00:08:05.259 --rc geninfo_unexecuted_blocks=1 00:08:05.259 00:08:05.259 ' 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:05.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.259 --rc genhtml_branch_coverage=1 00:08:05.259 --rc genhtml_function_coverage=1 00:08:05.259 --rc genhtml_legend=1 00:08:05.259 --rc geninfo_all_blocks=1 00:08:05.259 --rc geninfo_unexecuted_blocks=1 00:08:05.259 00:08:05.259 ' 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:05.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.259 --rc genhtml_branch_coverage=1 00:08:05.259 --rc genhtml_function_coverage=1 00:08:05.259 --rc genhtml_legend=1 00:08:05.259 --rc geninfo_all_blocks=1 00:08:05.259 --rc geninfo_unexecuted_blocks=1 00:08:05.259 00:08:05.259 ' 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.259 15:25:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:05.259 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.260 
15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.260 15:25:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:05.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.260 
15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:05.260 15:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.848 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.848 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:11.848 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:11.848 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:11.848 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:11.848 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:11.848 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:11.848 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:11.848 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:11.848 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:11.848 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:11.849 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:11.849 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:11.849 
15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:11.849 Found net devices under 0000:86:00.0: cvl_0_0 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:11.849 Found net devices under 0000:86:00.1: cvl_0_1 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:11.849 15:25:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:11.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:08:11.849 00:08:11.849 --- 10.0.0.2 ping statistics --- 00:08:11.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.849 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:11.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:08:11.849 00:08:11.849 --- 10.0.0.1 ping statistics --- 00:08:11.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.849 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:11.849 15:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:11.849 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:11.849 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:11.849 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:11.849 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.849 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2862664 00:08:11.849 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:11.849 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2862664 00:08:11.849 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2862664 ']' 00:08:11.849 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.849 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.850 [2024-12-06 15:25:17.068448] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:08:11.850 [2024-12-06 15:25:17.068496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.850 [2024-12-06 15:25:17.146209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.850 [2024-12-06 15:25:17.185467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.850 [2024-12-06 15:25:17.185502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.850 [2024-12-06 15:25:17.185509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.850 [2024-12-06 15:25:17.185516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.850 [2024-12-06 15:25:17.185521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
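The `waitforlisten 2862664` step above blocks until the just-launched `nvmf_tgt` exposes its RPC socket ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."). A minimal sketch of that polling pattern, assuming a bounded retry loop; `wait_for_socket` is an illustrative name, not SPDK's actual helper:

```shell
#!/usr/bin/env bash
# Poll until a path exists and is a Unix socket, with bounded retries.
# wait_for_socket is an illustrative name, not SPDK's waitforlisten.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

# Example: give up quickly on a socket that never appears.
wait_for_socket /tmp/definitely-missing.sock 3 || echo "not listening yet"
```

SPDK's real helper additionally verifies the PID is still alive and issues an RPC over the socket before declaring the target ready; this sketch only covers the existence check.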
00:08:11.850 [2024-12-06 15:25:17.186041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:11.850 [2024-12-06 15:25:17.489647] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.850 ************************************ 00:08:11.850 START TEST lvs_grow_clean 00:08:11.850 ************************************ 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:11.850 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:12.108 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=505c2ab7-4df9-4e72-be9d-15ed2a64672a 00:08:12.108 15:25:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 505c2ab7-4df9-4e72-be9d-15ed2a64672a 00:08:12.108 15:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:12.367 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:12.367 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:12.367 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 505c2ab7-4df9-4e72-be9d-15ed2a64672a lvol 150 00:08:12.626 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3cab264f-b6b7-4b13-b093-0090f94e3bb5 00:08:12.626 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:12.626 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:12.626 [2024-12-06 15:25:18.527244] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:12.626 [2024-12-06 15:25:18.527296] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:12.626 true 00:08:12.626 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 505c2ab7-4df9-4e72-be9d-15ed2a64672a 00:08:12.626 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:12.885 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:12.885 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:13.144 15:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3cab264f-b6b7-4b13-b093-0090f94e3bb5 00:08:13.144 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:13.403 [2024-12-06 15:25:19.265498] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.403 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:13.662 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2863167 00:08:13.662 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:13.662 15:25:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:13.662 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2863167 /var/tmp/bdevperf.sock 00:08:13.662 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2863167 ']' 00:08:13.662 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:13.662 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.662 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:13.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:13.662 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.662 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:13.662 [2024-12-06 15:25:19.503708] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:08:13.662 [2024-12-06 15:25:19.503752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2863167 ] 00:08:13.662 [2024-12-06 15:25:19.578821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.662 [2024-12-06 15:25:19.618851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.920 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.920 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:13.920 15:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:14.179 Nvme0n1 00:08:14.179 15:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:14.437 [ 00:08:14.437 { 00:08:14.437 "name": "Nvme0n1", 00:08:14.437 "aliases": [ 00:08:14.437 "3cab264f-b6b7-4b13-b093-0090f94e3bb5" 00:08:14.437 ], 00:08:14.437 "product_name": "NVMe disk", 00:08:14.437 "block_size": 4096, 00:08:14.437 "num_blocks": 38912, 00:08:14.437 "uuid": "3cab264f-b6b7-4b13-b093-0090f94e3bb5", 00:08:14.437 "numa_id": 1, 00:08:14.437 "assigned_rate_limits": { 00:08:14.437 "rw_ios_per_sec": 0, 00:08:14.437 "rw_mbytes_per_sec": 0, 00:08:14.437 "r_mbytes_per_sec": 0, 00:08:14.437 "w_mbytes_per_sec": 0 00:08:14.437 }, 00:08:14.437 "claimed": false, 00:08:14.437 "zoned": false, 00:08:14.437 "supported_io_types": { 00:08:14.437 "read": true, 
00:08:14.437 "write": true, 00:08:14.437 "unmap": true, 00:08:14.437 "flush": true, 00:08:14.437 "reset": true, 00:08:14.437 "nvme_admin": true, 00:08:14.437 "nvme_io": true, 00:08:14.437 "nvme_io_md": false, 00:08:14.437 "write_zeroes": true, 00:08:14.437 "zcopy": false, 00:08:14.437 "get_zone_info": false, 00:08:14.438 "zone_management": false, 00:08:14.438 "zone_append": false, 00:08:14.438 "compare": true, 00:08:14.438 "compare_and_write": true, 00:08:14.438 "abort": true, 00:08:14.438 "seek_hole": false, 00:08:14.438 "seek_data": false, 00:08:14.438 "copy": true, 00:08:14.438 "nvme_iov_md": false 00:08:14.438 }, 00:08:14.438 "memory_domains": [ 00:08:14.438 { 00:08:14.438 "dma_device_id": "system", 00:08:14.438 "dma_device_type": 1 00:08:14.438 } 00:08:14.438 ], 00:08:14.438 "driver_specific": { 00:08:14.438 "nvme": [ 00:08:14.438 { 00:08:14.438 "trid": { 00:08:14.438 "trtype": "TCP", 00:08:14.438 "adrfam": "IPv4", 00:08:14.438 "traddr": "10.0.0.2", 00:08:14.438 "trsvcid": "4420", 00:08:14.438 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:14.438 }, 00:08:14.438 "ctrlr_data": { 00:08:14.438 "cntlid": 1, 00:08:14.438 "vendor_id": "0x8086", 00:08:14.438 "model_number": "SPDK bdev Controller", 00:08:14.438 "serial_number": "SPDK0", 00:08:14.438 "firmware_revision": "25.01", 00:08:14.438 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:14.438 "oacs": { 00:08:14.438 "security": 0, 00:08:14.438 "format": 0, 00:08:14.438 "firmware": 0, 00:08:14.438 "ns_manage": 0 00:08:14.438 }, 00:08:14.438 "multi_ctrlr": true, 00:08:14.438 "ana_reporting": false 00:08:14.438 }, 00:08:14.438 "vs": { 00:08:14.438 "nvme_version": "1.3" 00:08:14.438 }, 00:08:14.438 "ns_data": { 00:08:14.438 "id": 1, 00:08:14.438 "can_share": true 00:08:14.438 } 00:08:14.438 } 00:08:14.438 ], 00:08:14.438 "mp_policy": "active_passive" 00:08:14.438 } 00:08:14.438 } 00:08:14.438 ] 00:08:14.438 15:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2863283 00:08:14.438 15:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:14.438 15:25:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:14.438 Running I/O for 10 seconds... 00:08:15.815 Latency(us) 00:08:15.815 [2024-12-06T14:25:21.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.815 Nvme0n1 : 1.00 23354.00 91.23 0.00 0.00 0.00 0.00 0.00 00:08:15.815 [2024-12-06T14:25:21.813Z] =================================================================================================================== 00:08:15.815 [2024-12-06T14:25:21.813Z] Total : 23354.00 91.23 0.00 0.00 0.00 0.00 0.00 00:08:15.815 00:08:16.380 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 505c2ab7-4df9-4e72-be9d-15ed2a64672a 00:08:16.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.639 Nvme0n1 : 2.00 23546.00 91.98 0.00 0.00 0.00 0.00 0.00 00:08:16.639 [2024-12-06T14:25:22.637Z] =================================================================================================================== 00:08:16.639 [2024-12-06T14:25:22.637Z] Total : 23546.00 91.98 0.00 0.00 0.00 0.00 0.00 00:08:16.639 00:08:16.639 true 00:08:16.639 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 505c2ab7-4df9-4e72-be9d-15ed2a64672a 00:08:16.639 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:16.898 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:16.898 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:16.898 15:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2863283 00:08:17.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.466 Nvme0n1 : 3.00 23636.33 92.33 0.00 0.00 0.00 0.00 0.00 00:08:17.466 [2024-12-06T14:25:23.464Z] =================================================================================================================== 00:08:17.466 [2024-12-06T14:25:23.464Z] Total : 23636.33 92.33 0.00 0.00 0.00 0.00 0.00 00:08:17.466 00:08:18.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.845 Nvme0n1 : 4.00 23724.25 92.67 0.00 0.00 0.00 0.00 0.00 00:08:18.845 [2024-12-06T14:25:24.843Z] =================================================================================================================== 00:08:18.845 [2024-12-06T14:25:24.843Z] Total : 23724.25 92.67 0.00 0.00 0.00 0.00 0.00 00:08:18.845 00:08:19.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.779 Nvme0n1 : 5.00 23796.80 92.96 0.00 0.00 0.00 0.00 0.00 00:08:19.779 [2024-12-06T14:25:25.777Z] =================================================================================================================== 00:08:19.779 [2024-12-06T14:25:25.777Z] Total : 23796.80 92.96 0.00 0.00 0.00 0.00 0.00 00:08:19.779 00:08:20.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.714 Nvme0n1 : 6.00 23839.33 93.12 0.00 0.00 0.00 0.00 0.00 00:08:20.714 [2024-12-06T14:25:26.712Z] =================================================================================================================== 00:08:20.714 
[2024-12-06T14:25:26.712Z] Total : 23839.33 93.12 0.00 0.00 0.00 0.00 0.00 00:08:20.714 00:08:21.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.661 Nvme0n1 : 7.00 23869.86 93.24 0.00 0.00 0.00 0.00 0.00 00:08:21.661 [2024-12-06T14:25:27.659Z] =================================================================================================================== 00:08:21.661 [2024-12-06T14:25:27.659Z] Total : 23869.86 93.24 0.00 0.00 0.00 0.00 0.00 00:08:21.661 00:08:22.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.595 Nvme0n1 : 8.00 23898.25 93.35 0.00 0.00 0.00 0.00 0.00 00:08:22.595 [2024-12-06T14:25:28.593Z] =================================================================================================================== 00:08:22.595 [2024-12-06T14:25:28.593Z] Total : 23898.25 93.35 0.00 0.00 0.00 0.00 0.00 00:08:22.595 00:08:23.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.529 Nvme0n1 : 9.00 23917.56 93.43 0.00 0.00 0.00 0.00 0.00 00:08:23.529 [2024-12-06T14:25:29.527Z] =================================================================================================================== 00:08:23.529 [2024-12-06T14:25:29.527Z] Total : 23917.56 93.43 0.00 0.00 0.00 0.00 0.00 00:08:23.529 00:08:24.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.463 Nvme0n1 : 10.00 23896.10 93.34 0.00 0.00 0.00 0.00 0.00 00:08:24.463 [2024-12-06T14:25:30.461Z] =================================================================================================================== 00:08:24.463 [2024-12-06T14:25:30.461Z] Total : 23896.10 93.34 0.00 0.00 0.00 0.00 0.00 00:08:24.463 00:08:24.463 00:08:24.463 Latency(us) 00:08:24.463 [2024-12-06T14:25:30.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.463 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:24.463 Nvme0n1 : 10.00 23900.61 93.36 0.00 0.00 5352.43 3105.16 13731.35 00:08:24.463 [2024-12-06T14:25:30.461Z] =================================================================================================================== 00:08:24.463 [2024-12-06T14:25:30.461Z] Total : 23900.61 93.36 0.00 0.00 5352.43 3105.16 13731.35 00:08:24.463 { 00:08:24.463 "results": [ 00:08:24.463 { 00:08:24.463 "job": "Nvme0n1", 00:08:24.463 "core_mask": "0x2", 00:08:24.463 "workload": "randwrite", 00:08:24.463 "status": "finished", 00:08:24.463 "queue_depth": 128, 00:08:24.463 "io_size": 4096, 00:08:24.463 "runtime": 10.00347, 00:08:24.463 "iops": 23900.60648954813, 00:08:24.463 "mibps": 93.36174409979738, 00:08:24.463 "io_failed": 0, 00:08:24.463 "io_timeout": 0, 00:08:24.463 "avg_latency_us": 5352.432723626129, 00:08:24.463 "min_latency_us": 3105.158095238095, 00:08:24.463 "max_latency_us": 13731.352380952381 00:08:24.463 } 00:08:24.463 ], 00:08:24.463 "core_count": 1 00:08:24.463 } 00:08:24.720 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2863167 00:08:24.720 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2863167 ']' 00:08:24.720 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2863167 00:08:24.720 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:24.720 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.720 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2863167 00:08:24.720 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:24.720 15:25:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:24.720 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2863167' 00:08:24.720 killing process with pid 2863167 00:08:24.720 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2863167 00:08:24.720 Received shutdown signal, test time was about 10.000000 seconds 00:08:24.720 00:08:24.720 Latency(us) 00:08:24.720 [2024-12-06T14:25:30.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.720 [2024-12-06T14:25:30.718Z] =================================================================================================================== 00:08:24.720 [2024-12-06T14:25:30.718Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:24.720 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2863167 00:08:24.720 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.977 15:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:25.234 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 505c2ab7-4df9-4e72-be9d-15ed2a64672a 00:08:25.234 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:25.494 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:25.494 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:25.494 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:25.494 [2024-12-06 15:25:31.450074] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:25.752 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 505c2ab7-4df9-4e72-be9d-15ed2a64672a 00:08:25.752 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:25.752 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 505c2ab7-4df9-4e72-be9d-15ed2a64672a 00:08:25.752 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.752 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.752 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.752 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.753 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.753 
15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.753 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.753 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:25.753 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 505c2ab7-4df9-4e72-be9d-15ed2a64672a 00:08:25.753 request: 00:08:25.753 { 00:08:25.753 "uuid": "505c2ab7-4df9-4e72-be9d-15ed2a64672a", 00:08:25.753 "method": "bdev_lvol_get_lvstores", 00:08:25.753 "req_id": 1 00:08:25.753 } 00:08:25.753 Got JSON-RPC error response 00:08:25.753 response: 00:08:25.753 { 00:08:25.753 "code": -19, 00:08:25.753 "message": "No such device" 00:08:25.753 } 00:08:25.753 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:25.753 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:25.753 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:25.753 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:25.753 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.011 aio_bdev 00:08:26.011 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3cab264f-b6b7-4b13-b093-0090f94e3bb5 00:08:26.011 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3cab264f-b6b7-4b13-b093-0090f94e3bb5 00:08:26.011 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.011 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:26.011 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.011 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.011 15:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:26.271 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3cab264f-b6b7-4b13-b093-0090f94e3bb5 -t 2000 00:08:26.271 [ 00:08:26.271 { 00:08:26.271 "name": "3cab264f-b6b7-4b13-b093-0090f94e3bb5", 00:08:26.271 "aliases": [ 00:08:26.271 "lvs/lvol" 00:08:26.271 ], 00:08:26.271 "product_name": "Logical Volume", 00:08:26.271 "block_size": 4096, 00:08:26.271 "num_blocks": 38912, 00:08:26.271 "uuid": "3cab264f-b6b7-4b13-b093-0090f94e3bb5", 00:08:26.271 "assigned_rate_limits": { 00:08:26.271 "rw_ios_per_sec": 0, 00:08:26.271 "rw_mbytes_per_sec": 0, 00:08:26.271 "r_mbytes_per_sec": 0, 00:08:26.271 "w_mbytes_per_sec": 0 00:08:26.271 }, 00:08:26.271 "claimed": false, 00:08:26.271 "zoned": false, 00:08:26.271 "supported_io_types": { 00:08:26.271 "read": true, 00:08:26.271 "write": true, 00:08:26.271 "unmap": true, 00:08:26.271 "flush": false, 00:08:26.271 "reset": true, 00:08:26.271 
"nvme_admin": false, 00:08:26.271 "nvme_io": false, 00:08:26.271 "nvme_io_md": false, 00:08:26.271 "write_zeroes": true, 00:08:26.271 "zcopy": false, 00:08:26.271 "get_zone_info": false, 00:08:26.271 "zone_management": false, 00:08:26.271 "zone_append": false, 00:08:26.271 "compare": false, 00:08:26.271 "compare_and_write": false, 00:08:26.271 "abort": false, 00:08:26.271 "seek_hole": true, 00:08:26.271 "seek_data": true, 00:08:26.271 "copy": false, 00:08:26.271 "nvme_iov_md": false 00:08:26.271 }, 00:08:26.271 "driver_specific": { 00:08:26.271 "lvol": { 00:08:26.271 "lvol_store_uuid": "505c2ab7-4df9-4e72-be9d-15ed2a64672a", 00:08:26.271 "base_bdev": "aio_bdev", 00:08:26.271 "thin_provision": false, 00:08:26.271 "num_allocated_clusters": 38, 00:08:26.271 "snapshot": false, 00:08:26.271 "clone": false, 00:08:26.271 "esnap_clone": false 00:08:26.271 } 00:08:26.271 } 00:08:26.271 } 00:08:26.271 ] 00:08:26.271 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:26.271 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 505c2ab7-4df9-4e72-be9d-15ed2a64672a 00:08:26.271 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:26.530 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:26.530 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 505c2ab7-4df9-4e72-be9d-15ed2a64672a 00:08:26.530 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:26.789 15:25:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:26.789 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3cab264f-b6b7-4b13-b093-0090f94e3bb5 00:08:27.048 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 505c2ab7-4df9-4e72-be9d-15ed2a64672a 00:08:27.048 15:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.307 00:08:27.307 real 0m15.660s 00:08:27.307 user 0m15.257s 00:08:27.307 sys 0m1.450s 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:27.307 ************************************ 00:08:27.307 END TEST lvs_grow_clean 00:08:27.307 ************************************ 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:27.307 ************************************ 
00:08:27.307 START TEST lvs_grow_dirty 00:08:27.307 ************************************ 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.307 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:27.566 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:27.566 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:27.826 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0d4890d1-9326-443c-b95e-60287f260eae 00:08:27.826 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d4890d1-9326-443c-b95e-60287f260eae 00:08:27.826 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:28.085 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:28.085 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:28.085 15:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0d4890d1-9326-443c-b95e-60287f260eae lvol 150 00:08:28.085 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9327241d-f275-45c4-8ec9-25572088aa3f 00:08:28.085 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:28.085 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:28.369 [2024-12-06 15:25:34.250319] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:28.369 [2024-12-06 15:25:34.250397] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:28.369 true 00:08:28.369 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:28.369 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d4890d1-9326-443c-b95e-60287f260eae 00:08:28.629 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:28.629 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:28.629 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9327241d-f275-45c4-8ec9-25572088aa3f 00:08:28.887 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:29.146 [2024-12-06 15:25:34.968470] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.146 15:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:29.404 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:29.404 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2865775 00:08:29.404 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:29.404 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2865775 /var/tmp/bdevperf.sock 00:08:29.404 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2865775 ']' 00:08:29.404 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:29.404 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.404 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:29.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:29.404 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.404 15:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:29.404 [2024-12-06 15:25:35.216001] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:08:29.404 [2024-12-06 15:25:35.216048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2865775 ] 00:08:29.404 [2024-12-06 15:25:35.290064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.404 [2024-12-06 15:25:35.331333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.339 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.339 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:30.339 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:30.598 Nvme0n1 00:08:30.598 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:30.857 [ 00:08:30.857 { 00:08:30.857 "name": "Nvme0n1", 00:08:30.857 "aliases": [ 00:08:30.857 "9327241d-f275-45c4-8ec9-25572088aa3f" 00:08:30.857 ], 00:08:30.857 "product_name": "NVMe disk", 00:08:30.857 "block_size": 4096, 00:08:30.857 "num_blocks": 38912, 00:08:30.857 "uuid": "9327241d-f275-45c4-8ec9-25572088aa3f", 00:08:30.857 "numa_id": 1, 00:08:30.857 "assigned_rate_limits": { 00:08:30.857 "rw_ios_per_sec": 0, 00:08:30.857 "rw_mbytes_per_sec": 0, 00:08:30.857 "r_mbytes_per_sec": 0, 00:08:30.857 "w_mbytes_per_sec": 0 00:08:30.857 }, 00:08:30.857 "claimed": false, 00:08:30.857 "zoned": false, 00:08:30.857 "supported_io_types": { 00:08:30.857 "read": true, 
00:08:30.857 "write": true, 00:08:30.857 "unmap": true, 00:08:30.857 "flush": true, 00:08:30.857 "reset": true, 00:08:30.857 "nvme_admin": true, 00:08:30.857 "nvme_io": true, 00:08:30.857 "nvme_io_md": false, 00:08:30.857 "write_zeroes": true, 00:08:30.857 "zcopy": false, 00:08:30.857 "get_zone_info": false, 00:08:30.857 "zone_management": false, 00:08:30.857 "zone_append": false, 00:08:30.857 "compare": true, 00:08:30.857 "compare_and_write": true, 00:08:30.857 "abort": true, 00:08:30.857 "seek_hole": false, 00:08:30.857 "seek_data": false, 00:08:30.857 "copy": true, 00:08:30.857 "nvme_iov_md": false 00:08:30.857 }, 00:08:30.857 "memory_domains": [ 00:08:30.857 { 00:08:30.857 "dma_device_id": "system", 00:08:30.857 "dma_device_type": 1 00:08:30.857 } 00:08:30.857 ], 00:08:30.857 "driver_specific": { 00:08:30.857 "nvme": [ 00:08:30.857 { 00:08:30.857 "trid": { 00:08:30.857 "trtype": "TCP", 00:08:30.857 "adrfam": "IPv4", 00:08:30.857 "traddr": "10.0.0.2", 00:08:30.857 "trsvcid": "4420", 00:08:30.857 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:30.857 }, 00:08:30.857 "ctrlr_data": { 00:08:30.857 "cntlid": 1, 00:08:30.857 "vendor_id": "0x8086", 00:08:30.857 "model_number": "SPDK bdev Controller", 00:08:30.857 "serial_number": "SPDK0", 00:08:30.857 "firmware_revision": "25.01", 00:08:30.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:30.857 "oacs": { 00:08:30.857 "security": 0, 00:08:30.857 "format": 0, 00:08:30.857 "firmware": 0, 00:08:30.857 "ns_manage": 0 00:08:30.857 }, 00:08:30.857 "multi_ctrlr": true, 00:08:30.857 "ana_reporting": false 00:08:30.857 }, 00:08:30.857 "vs": { 00:08:30.857 "nvme_version": "1.3" 00:08:30.857 }, 00:08:30.857 "ns_data": { 00:08:30.857 "id": 1, 00:08:30.857 "can_share": true 00:08:30.857 } 00:08:30.857 } 00:08:30.857 ], 00:08:30.857 "mp_policy": "active_passive" 00:08:30.857 } 00:08:30.857 } 00:08:30.857 ] 00:08:30.857 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2866009 00:08:30.857 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:30.857 15:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:30.857 Running I/O for 10 seconds... 00:08:31.793 Latency(us) 00:08:31.793 [2024-12-06T14:25:37.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.793 Nvme0n1 : 1.00 23648.00 92.38 0.00 0.00 0.00 0.00 0.00 00:08:31.793 [2024-12-06T14:25:37.791Z] =================================================================================================================== 00:08:31.793 [2024-12-06T14:25:37.791Z] Total : 23648.00 92.38 0.00 0.00 0.00 0.00 0.00 00:08:31.794 00:08:32.730 15:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0d4890d1-9326-443c-b95e-60287f260eae 00:08:32.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.989 Nvme0n1 : 2.00 23744.50 92.75 0.00 0.00 0.00 0.00 0.00 00:08:32.989 [2024-12-06T14:25:38.987Z] =================================================================================================================== 00:08:32.989 [2024-12-06T14:25:38.987Z] Total : 23744.50 92.75 0.00 0.00 0.00 0.00 0.00 00:08:32.989 00:08:32.989 true 00:08:32.989 15:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d4890d1-9326-443c-b95e-60287f260eae 00:08:32.989 15:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:33.248 15:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:33.248 15:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:33.248 15:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2866009 00:08:33.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.816 Nvme0n1 : 3.00 23790.00 92.93 0.00 0.00 0.00 0.00 0.00 00:08:33.816 [2024-12-06T14:25:39.814Z] =================================================================================================================== 00:08:33.816 [2024-12-06T14:25:39.814Z] Total : 23790.00 92.93 0.00 0.00 0.00 0.00 0.00 00:08:33.816 00:08:34.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.751 Nvme0n1 : 4.00 23717.00 92.64 0.00 0.00 0.00 0.00 0.00 00:08:34.751 [2024-12-06T14:25:40.749Z] =================================================================================================================== 00:08:34.751 [2024-12-06T14:25:40.749Z] Total : 23717.00 92.64 0.00 0.00 0.00 0.00 0.00 00:08:34.751 00:08:36.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.128 Nvme0n1 : 5.00 23787.60 92.92 0.00 0.00 0.00 0.00 0.00 00:08:36.128 [2024-12-06T14:25:42.126Z] =================================================================================================================== 00:08:36.128 [2024-12-06T14:25:42.126Z] Total : 23787.60 92.92 0.00 0.00 0.00 0.00 0.00 00:08:36.128 00:08:37.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.063 Nvme0n1 : 6.00 23841.00 93.13 0.00 0.00 0.00 0.00 0.00 00:08:37.063 [2024-12-06T14:25:43.061Z] =================================================================================================================== 00:08:37.063 
[2024-12-06T14:25:43.061Z] Total : 23841.00 93.13 0.00 0.00 0.00 0.00 0.00 00:08:37.063 00:08:37.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.997 Nvme0n1 : 7.00 23882.57 93.29 0.00 0.00 0.00 0.00 0.00 00:08:37.997 [2024-12-06T14:25:43.995Z] =================================================================================================================== 00:08:37.997 [2024-12-06T14:25:43.995Z] Total : 23882.57 93.29 0.00 0.00 0.00 0.00 0.00 00:08:37.997 00:08:38.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.953 Nvme0n1 : 8.00 23913.62 93.41 0.00 0.00 0.00 0.00 0.00 00:08:38.953 [2024-12-06T14:25:44.951Z] =================================================================================================================== 00:08:38.953 [2024-12-06T14:25:44.951Z] Total : 23913.62 93.41 0.00 0.00 0.00 0.00 0.00 00:08:38.953 00:08:39.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.889 Nvme0n1 : 9.00 23937.67 93.51 0.00 0.00 0.00 0.00 0.00 00:08:39.889 [2024-12-06T14:25:45.887Z] =================================================================================================================== 00:08:39.889 [2024-12-06T14:25:45.887Z] Total : 23937.67 93.51 0.00 0.00 0.00 0.00 0.00 00:08:39.889 00:08:40.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.822 Nvme0n1 : 10.00 23963.70 93.61 0.00 0.00 0.00 0.00 0.00 00:08:40.822 [2024-12-06T14:25:46.820Z] =================================================================================================================== 00:08:40.822 [2024-12-06T14:25:46.820Z] Total : 23963.70 93.61 0.00 0.00 0.00 0.00 0.00 00:08:40.822 00:08:40.822 00:08:40.822 Latency(us) 00:08:40.822 [2024-12-06T14:25:46.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:40.822 Nvme0n1 : 10.00 23965.31 93.61 0.00 0.00 5338.09 1505.77 11047.50 00:08:40.822 [2024-12-06T14:25:46.820Z] =================================================================================================================== 00:08:40.822 [2024-12-06T14:25:46.820Z] Total : 23965.31 93.61 0.00 0.00 5338.09 1505.77 11047.50 00:08:40.822 { 00:08:40.822 "results": [ 00:08:40.822 { 00:08:40.822 "job": "Nvme0n1", 00:08:40.822 "core_mask": "0x2", 00:08:40.822 "workload": "randwrite", 00:08:40.822 "status": "finished", 00:08:40.822 "queue_depth": 128, 00:08:40.822 "io_size": 4096, 00:08:40.822 "runtime": 10.004668, 00:08:40.822 "iops": 23965.312991895382, 00:08:40.822 "mibps": 93.61450387459134, 00:08:40.822 "io_failed": 0, 00:08:40.822 "io_timeout": 0, 00:08:40.822 "avg_latency_us": 5338.093012010768, 00:08:40.822 "min_latency_us": 1505.767619047619, 00:08:40.822 "max_latency_us": 11047.497142857143 00:08:40.822 } 00:08:40.822 ], 00:08:40.822 "core_count": 1 00:08:40.822 } 00:08:40.822 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2865775 00:08:40.822 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2865775 ']' 00:08:40.822 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2865775 00:08:40.822 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:40.822 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.822 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2865775 00:08:41.080 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:41.080 15:25:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:41.080 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2865775' 00:08:41.080 killing process with pid 2865775 00:08:41.080 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2865775 00:08:41.080 Received shutdown signal, test time was about 10.000000 seconds 00:08:41.080 00:08:41.080 Latency(us) 00:08:41.080 [2024-12-06T14:25:47.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.080 [2024-12-06T14:25:47.078Z] =================================================================================================================== 00:08:41.080 [2024-12-06T14:25:47.078Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:41.080 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2865775 00:08:41.080 15:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.416 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d4890d1-9326-443c-b95e-60287f260eae 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2862664 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2862664 00:08:41.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2862664 Killed "${NVMF_APP[@]}" "$@" 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2867858 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2867858 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2867858 ']' 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.683 15:25:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.683 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:42.008 [2024-12-06 15:25:47.708441] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:08:42.008 [2024-12-06 15:25:47.708487] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.008 [2024-12-06 15:25:47.786919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.008 [2024-12-06 15:25:47.827338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.008 [2024-12-06 15:25:47.827380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.008 [2024-12-06 15:25:47.827387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.008 [2024-12-06 15:25:47.827394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.008 [2024-12-06 15:25:47.827399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:42.008 [2024-12-06 15:25:47.827952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.008 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.008 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:42.008 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.008 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.008 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:42.268 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.268 15:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.268 [2024-12-06 15:25:48.143487] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:42.268 [2024-12-06 15:25:48.143577] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:42.268 [2024-12-06 15:25:48.143610] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:42.268 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:42.268 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9327241d-f275-45c4-8ec9-25572088aa3f 00:08:42.268 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9327241d-f275-45c4-8ec9-25572088aa3f 
00:08:42.268 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.268 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:42.268 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.268 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.268 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:42.527 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9327241d-f275-45c4-8ec9-25572088aa3f -t 2000 00:08:42.786 [ 00:08:42.786 { 00:08:42.786 "name": "9327241d-f275-45c4-8ec9-25572088aa3f", 00:08:42.786 "aliases": [ 00:08:42.786 "lvs/lvol" 00:08:42.786 ], 00:08:42.786 "product_name": "Logical Volume", 00:08:42.786 "block_size": 4096, 00:08:42.786 "num_blocks": 38912, 00:08:42.786 "uuid": "9327241d-f275-45c4-8ec9-25572088aa3f", 00:08:42.786 "assigned_rate_limits": { 00:08:42.786 "rw_ios_per_sec": 0, 00:08:42.786 "rw_mbytes_per_sec": 0, 00:08:42.786 "r_mbytes_per_sec": 0, 00:08:42.786 "w_mbytes_per_sec": 0 00:08:42.786 }, 00:08:42.786 "claimed": false, 00:08:42.786 "zoned": false, 00:08:42.786 "supported_io_types": { 00:08:42.786 "read": true, 00:08:42.786 "write": true, 00:08:42.786 "unmap": true, 00:08:42.786 "flush": false, 00:08:42.786 "reset": true, 00:08:42.786 "nvme_admin": false, 00:08:42.786 "nvme_io": false, 00:08:42.786 "nvme_io_md": false, 00:08:42.786 "write_zeroes": true, 00:08:42.786 "zcopy": false, 00:08:42.786 "get_zone_info": false, 00:08:42.786 "zone_management": false, 00:08:42.786 "zone_append": 
false, 00:08:42.786 "compare": false, 00:08:42.786 "compare_and_write": false, 00:08:42.786 "abort": false, 00:08:42.786 "seek_hole": true, 00:08:42.786 "seek_data": true, 00:08:42.786 "copy": false, 00:08:42.786 "nvme_iov_md": false 00:08:42.786 }, 00:08:42.786 "driver_specific": { 00:08:42.786 "lvol": { 00:08:42.786 "lvol_store_uuid": "0d4890d1-9326-443c-b95e-60287f260eae", 00:08:42.786 "base_bdev": "aio_bdev", 00:08:42.786 "thin_provision": false, 00:08:42.786 "num_allocated_clusters": 38, 00:08:42.786 "snapshot": false, 00:08:42.786 "clone": false, 00:08:42.786 "esnap_clone": false 00:08:42.786 } 00:08:42.786 } 00:08:42.786 } 00:08:42.786 ] 00:08:42.786 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:42.786 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d4890d1-9326-443c-b95e-60287f260eae 00:08:42.786 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:42.786 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:42.786 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d4890d1-9326-443c-b95e-60287f260eae 00:08:42.786 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:43.045 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:43.045 15:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:43.304 [2024-12-06 15:25:49.092065] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:43.304 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d4890d1-9326-443c-b95e-60287f260eae 00:08:43.304 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:43.304 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d4890d1-9326-443c-b95e-60287f260eae 00:08:43.304 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.304 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.304 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.304 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.304 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.304 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.304 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.304 15:25:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:43.304 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d4890d1-9326-443c-b95e-60287f260eae 00:08:43.564 request: 00:08:43.564 { 00:08:43.564 "uuid": "0d4890d1-9326-443c-b95e-60287f260eae", 00:08:43.564 "method": "bdev_lvol_get_lvstores", 00:08:43.564 "req_id": 1 00:08:43.564 } 00:08:43.564 Got JSON-RPC error response 00:08:43.564 response: 00:08:43.564 { 00:08:43.564 "code": -19, 00:08:43.564 "message": "No such device" 00:08:43.564 } 00:08:43.564 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:43.564 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:43.564 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:43.564 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:43.564 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:43.564 aio_bdev 00:08:43.564 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9327241d-f275-45c4-8ec9-25572088aa3f 00:08:43.564 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9327241d-f275-45c4-8ec9-25572088aa3f 00:08:43.564 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:43.564 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:43.564 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:43.564 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:43.564 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:43.822 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9327241d-f275-45c4-8ec9-25572088aa3f -t 2000 00:08:44.080 [ 00:08:44.080 { 00:08:44.080 "name": "9327241d-f275-45c4-8ec9-25572088aa3f", 00:08:44.080 "aliases": [ 00:08:44.080 "lvs/lvol" 00:08:44.080 ], 00:08:44.080 "product_name": "Logical Volume", 00:08:44.080 "block_size": 4096, 00:08:44.080 "num_blocks": 38912, 00:08:44.080 "uuid": "9327241d-f275-45c4-8ec9-25572088aa3f", 00:08:44.080 "assigned_rate_limits": { 00:08:44.080 "rw_ios_per_sec": 0, 00:08:44.080 "rw_mbytes_per_sec": 0, 00:08:44.080 "r_mbytes_per_sec": 0, 00:08:44.080 "w_mbytes_per_sec": 0 00:08:44.080 }, 00:08:44.080 "claimed": false, 00:08:44.080 "zoned": false, 00:08:44.080 "supported_io_types": { 00:08:44.080 "read": true, 00:08:44.080 "write": true, 00:08:44.080 "unmap": true, 00:08:44.080 "flush": false, 00:08:44.080 "reset": true, 00:08:44.080 "nvme_admin": false, 00:08:44.080 "nvme_io": false, 00:08:44.080 "nvme_io_md": false, 00:08:44.080 "write_zeroes": true, 00:08:44.080 "zcopy": false, 00:08:44.080 "get_zone_info": false, 00:08:44.080 "zone_management": false, 00:08:44.080 "zone_append": false, 00:08:44.080 "compare": false, 00:08:44.080 "compare_and_write": false, 
00:08:44.080 "abort": false, 00:08:44.080 "seek_hole": true, 00:08:44.080 "seek_data": true, 00:08:44.080 "copy": false, 00:08:44.080 "nvme_iov_md": false 00:08:44.080 }, 00:08:44.080 "driver_specific": { 00:08:44.080 "lvol": { 00:08:44.080 "lvol_store_uuid": "0d4890d1-9326-443c-b95e-60287f260eae", 00:08:44.080 "base_bdev": "aio_bdev", 00:08:44.080 "thin_provision": false, 00:08:44.080 "num_allocated_clusters": 38, 00:08:44.080 "snapshot": false, 00:08:44.080 "clone": false, 00:08:44.080 "esnap_clone": false 00:08:44.080 } 00:08:44.080 } 00:08:44.080 } 00:08:44.080 ] 00:08:44.080 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:44.080 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d4890d1-9326-443c-b95e-60287f260eae 00:08:44.080 15:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:44.080 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:44.080 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d4890d1-9326-443c-b95e-60287f260eae 00:08:44.080 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:44.338 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:44.338 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9327241d-f275-45c4-8ec9-25572088aa3f 00:08:44.596 15:25:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0d4890d1-9326-443c-b95e-60287f260eae 00:08:44.855 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:44.855 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:45.114 00:08:45.114 real 0m17.580s 00:08:45.114 user 0m45.290s 00:08:45.114 sys 0m3.752s 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.114 ************************************ 00:08:45.114 END TEST lvs_grow_dirty 00:08:45.114 ************************************ 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:45.114 nvmf_trace.0 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:45.114 15:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:45.114 rmmod nvme_tcp 00:08:45.114 rmmod nvme_fabrics 00:08:45.114 rmmod nvme_keyring 00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2867858 ']' 00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2867858 00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2867858 ']' 00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2867858 
00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2867858 00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2867858' 00:08:45.114 killing process with pid 2867858 00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2867858 00:08:45.114 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2867858 00:08:45.372 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:45.373 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:45.373 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:45.373 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:45.373 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:45.373 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:45.373 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:45.373 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.373 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:45.373 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.373 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.373 15:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.907 00:08:47.907 real 0m42.525s 00:08:47.907 user 1m6.199s 00:08:47.907 sys 0m10.113s 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:47.907 ************************************ 00:08:47.907 END TEST nvmf_lvs_grow 00:08:47.907 ************************************ 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.907 ************************************ 00:08:47.907 START TEST nvmf_bdev_io_wait 00:08:47.907 ************************************ 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:47.907 * Looking for test storage... 
00:08:47.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:47.907 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.907 --rc genhtml_branch_coverage=1 00:08:47.907 --rc genhtml_function_coverage=1 00:08:47.907 --rc genhtml_legend=1 00:08:47.907 --rc geninfo_all_blocks=1 00:08:47.907 --rc geninfo_unexecuted_blocks=1 00:08:47.907 00:08:47.907 ' 00:08:47.907 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:47.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.907 --rc genhtml_branch_coverage=1 00:08:47.907 --rc genhtml_function_coverage=1 00:08:47.907 --rc genhtml_legend=1 00:08:47.907 --rc geninfo_all_blocks=1 00:08:47.907 --rc geninfo_unexecuted_blocks=1 00:08:47.907 00:08:47.907 ' 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:47.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.908 --rc genhtml_branch_coverage=1 00:08:47.908 --rc genhtml_function_coverage=1 00:08:47.908 --rc genhtml_legend=1 00:08:47.908 --rc geninfo_all_blocks=1 00:08:47.908 --rc geninfo_unexecuted_blocks=1 00:08:47.908 00:08:47.908 ' 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:47.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.908 --rc genhtml_branch_coverage=1 00:08:47.908 --rc genhtml_function_coverage=1 00:08:47.908 --rc genhtml_legend=1 00:08:47.908 --rc geninfo_all_blocks=1 00:08:47.908 --rc geninfo_unexecuted_blocks=1 00:08:47.908 00:08:47.908 ' 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.908 15:25:53 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.908 15:25:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.481 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.481 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:54.481 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:54.481 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:54.481 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:54.481 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:54.481 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:54.481 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:54.481 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:54.481 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:54.481 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:54.481 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:54.481 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:54.481 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:54.482 15:25:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:54.482 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:54.482 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.482 15:25:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:54.482 Found net devices under 0000:86:00.0: cvl_0_0 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.482 
15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:54.482 Found net devices under 0000:86:00.1: cvl_0_1 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.482 15:25:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:54.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:08:54.482 00:08:54.482 --- 10.0.0.2 ping statistics --- 00:08:54.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.482 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:54.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:08:54.482 00:08:54.482 --- 10.0.0.1 ping statistics --- 00:08:54.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.482 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.482 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2872139 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2872139 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2872139 ']' 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.483 [2024-12-06 15:25:59.621760] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:08:54.483 [2024-12-06 15:25:59.621812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.483 [2024-12-06 15:25:59.699082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.483 [2024-12-06 15:25:59.740454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.483 [2024-12-06 15:25:59.740495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:54.483 [2024-12-06 15:25:59.740502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.483 [2024-12-06 15:25:59.740509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.483 [2024-12-06 15:25:59.740515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.483 [2024-12-06 15:25:59.741942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.483 [2024-12-06 15:25:59.742049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.483 [2024-12-06 15:25:59.742137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.483 [2024-12-06 15:25:59.742138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.483 15:25:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.483 [2024-12-06 15:25:59.890391] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.483 Malloc0 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.483 
15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.483 [2024-12-06 15:25:59.945451] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2872167 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2872169 
00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:54.483 { 00:08:54.483 "params": { 00:08:54.483 "name": "Nvme$subsystem", 00:08:54.483 "trtype": "$TEST_TRANSPORT", 00:08:54.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.483 "adrfam": "ipv4", 00:08:54.483 "trsvcid": "$NVMF_PORT", 00:08:54.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.483 "hdgst": ${hdgst:-false}, 00:08:54.483 "ddgst": ${ddgst:-false} 00:08:54.483 }, 00:08:54.483 "method": "bdev_nvme_attach_controller" 00:08:54.483 } 00:08:54.483 EOF 00:08:54.483 )") 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2872171 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:54.483 { 00:08:54.483 "params": { 00:08:54.483 "name": "Nvme$subsystem", 00:08:54.483 "trtype": "$TEST_TRANSPORT", 00:08:54.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.483 "adrfam": "ipv4", 00:08:54.483 "trsvcid": "$NVMF_PORT", 00:08:54.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.483 "hdgst": ${hdgst:-false}, 00:08:54.483 "ddgst": ${ddgst:-false} 00:08:54.483 }, 00:08:54.483 "method": "bdev_nvme_attach_controller" 00:08:54.483 } 00:08:54.483 EOF 00:08:54.483 )") 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2872174 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:54.483 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:54.483 { 00:08:54.483 "params": { 
00:08:54.483 "name": "Nvme$subsystem", 00:08:54.483 "trtype": "$TEST_TRANSPORT", 00:08:54.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.483 "adrfam": "ipv4", 00:08:54.483 "trsvcid": "$NVMF_PORT", 00:08:54.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.483 "hdgst": ${hdgst:-false}, 00:08:54.483 "ddgst": ${ddgst:-false} 00:08:54.483 }, 00:08:54.483 "method": "bdev_nvme_attach_controller" 00:08:54.484 } 00:08:54.484 EOF 00:08:54.484 )") 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:54.484 { 00:08:54.484 "params": { 00:08:54.484 "name": "Nvme$subsystem", 00:08:54.484 "trtype": "$TEST_TRANSPORT", 00:08:54.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.484 "adrfam": "ipv4", 00:08:54.484 "trsvcid": "$NVMF_PORT", 00:08:54.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.484 "hdgst": ${hdgst:-false}, 00:08:54.484 "ddgst": ${ddgst:-false} 00:08:54.484 }, 00:08:54.484 "method": "bdev_nvme_attach_controller" 00:08:54.484 } 00:08:54.484 EOF 00:08:54.484 )") 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2872167 00:08:54.484 15:25:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:54.484 "params": { 00:08:54.484 "name": "Nvme1", 00:08:54.484 "trtype": "tcp", 00:08:54.484 "traddr": "10.0.0.2", 00:08:54.484 "adrfam": "ipv4", 00:08:54.484 "trsvcid": "4420", 00:08:54.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:54.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:54.484 "hdgst": false, 00:08:54.484 "ddgst": false 00:08:54.484 }, 00:08:54.484 "method": "bdev_nvme_attach_controller" 00:08:54.484 }' 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:54.484 "params": { 00:08:54.484 "name": "Nvme1", 00:08:54.484 "trtype": "tcp", 00:08:54.484 "traddr": "10.0.0.2", 00:08:54.484 "adrfam": "ipv4", 00:08:54.484 "trsvcid": "4420", 00:08:54.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:54.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:54.484 "hdgst": false, 00:08:54.484 "ddgst": false 00:08:54.484 }, 00:08:54.484 "method": "bdev_nvme_attach_controller" 00:08:54.484 }' 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:54.484 "params": { 00:08:54.484 "name": "Nvme1", 00:08:54.484 "trtype": "tcp", 00:08:54.484 "traddr": "10.0.0.2", 00:08:54.484 "adrfam": "ipv4", 00:08:54.484 "trsvcid": "4420", 00:08:54.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:54.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:54.484 "hdgst": false, 00:08:54.484 "ddgst": false 00:08:54.484 }, 00:08:54.484 "method": "bdev_nvme_attach_controller" 00:08:54.484 }' 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:54.484 15:25:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:54.484 "params": { 00:08:54.484 "name": "Nvme1", 00:08:54.484 "trtype": "tcp", 00:08:54.484 "traddr": "10.0.0.2", 00:08:54.484 "adrfam": "ipv4", 00:08:54.484 "trsvcid": "4420", 00:08:54.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:54.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:54.484 "hdgst": false, 00:08:54.484 "ddgst": false 00:08:54.484 }, 00:08:54.484 "method": "bdev_nvme_attach_controller" 00:08:54.484 }' 00:08:54.484 [2024-12-06 15:25:59.998364] Starting SPDK v25.01-pre git sha1 
562857cff / DPDK 24.03.0 initialization... 00:08:54.484 [2024-12-06 15:25:59.998422] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:54.484 [2024-12-06 15:25:59.998450] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:08:54.484 [2024-12-06 15:25:59.998454] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:08:54.484 [2024-12-06 15:25:59.998491] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:54.484 [2024-12-06 15:25:59.998491] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:54.484 [2024-12-06 15:26:00.000023] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:08:54.484 [2024-12-06 15:26:00.000072] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:54.484 [2024-12-06 15:26:00.195561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.484 [2024-12-06 15:26:00.238481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:54.484 [2024-12-06 15:26:00.286756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.484 [2024-12-06 15:26:00.327065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:54.484 [2024-12-06 15:26:00.388852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.484 [2024-12-06 15:26:00.439705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.484 [2024-12-06 15:26:00.442700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:54.742 [2024-12-06 15:26:00.483031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:54.742 Running I/O for 1 seconds... 00:08:54.742 Running I/O for 1 seconds... 00:08:54.742 Running I/O for 1 seconds... 00:08:54.999 Running I/O for 1 seconds... 
00:08:55.933 243728.00 IOPS, 952.06 MiB/s 00:08:55.933 Latency(us) 00:08:55.933 [2024-12-06T14:26:01.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.934 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:55.934 Nvme1n1 : 1.00 243354.66 950.60 0.00 0.00 523.54 224.30 1521.37 00:08:55.934 [2024-12-06T14:26:01.932Z] =================================================================================================================== 00:08:55.934 [2024-12-06T14:26:01.932Z] Total : 243354.66 950.60 0.00 0.00 523.54 224.30 1521.37 00:08:55.934 11728.00 IOPS, 45.81 MiB/s [2024-12-06T14:26:01.932Z] 11271.00 IOPS, 44.03 MiB/s 00:08:55.934 Latency(us) 00:08:55.934 [2024-12-06T14:26:01.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.934 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:55.934 Nvme1n1 : 1.01 11331.99 44.27 0.00 0.00 11256.78 5055.63 20097.71 00:08:55.934 [2024-12-06T14:26:01.932Z] =================================================================================================================== 00:08:55.934 [2024-12-06T14:26:01.932Z] Total : 11331.99 44.27 0.00 0.00 11256.78 5055.63 20097.71 00:08:55.934 00:08:55.934 Latency(us) 00:08:55.934 [2024-12-06T14:26:01.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.934 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:55.934 Nvme1n1 : 1.01 11785.23 46.04 0.00 0.00 10821.51 5648.58 20472.20 00:08:55.934 [2024-12-06T14:26:01.932Z] =================================================================================================================== 00:08:55.934 [2024-12-06T14:26:01.932Z] Total : 11785.23 46.04 0.00 0.00 10821.51 5648.58 20472.20 00:08:55.934 10072.00 IOPS, 39.34 MiB/s 00:08:55.934 Latency(us) 00:08:55.934 [2024-12-06T14:26:01.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:08:55.934 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:55.934 Nvme1n1 : 1.01 10144.38 39.63 0.00 0.00 12577.81 4618.73 23093.64 00:08:55.934 [2024-12-06T14:26:01.932Z] =================================================================================================================== 00:08:55.934 [2024-12-06T14:26:01.932Z] Total : 10144.38 39.63 0.00 0.00 12577.81 4618.73 23093.64 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2872169 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2872171 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2872174 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:08:55.934 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:55.934 rmmod nvme_tcp 00:08:56.193 rmmod nvme_fabrics 00:08:56.193 rmmod nvme_keyring 00:08:56.193 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.193 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:56.193 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:56.193 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2872139 ']' 00:08:56.193 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2872139 00:08:56.193 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2872139 ']' 00:08:56.193 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2872139 00:08:56.193 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:56.193 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.193 15:26:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2872139 00:08:56.193 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.193 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.193 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2872139' 00:08:56.193 killing process with pid 2872139 00:08:56.193 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2872139 00:08:56.193 15:26:02 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2872139 00:08:56.452 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:56.452 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.452 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.452 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:56.453 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:56.453 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:56.453 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:56.453 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:56.453 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:56.453 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.453 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.453 15:26:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.358 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:58.358 00:08:58.358 real 0m10.887s 00:08:58.358 user 0m16.769s 00:08:58.358 sys 0m6.275s 00:08:58.358 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.358 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.358 ************************************ 
00:08:58.358 END TEST nvmf_bdev_io_wait 00:08:58.358 ************************************ 00:08:58.358 15:26:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:58.358 15:26:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:58.358 15:26:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.358 15:26:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.358 ************************************ 00:08:58.358 START TEST nvmf_queue_depth 00:08:58.358 ************************************ 00:08:58.358 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:58.618 * Looking for test storage... 00:08:58.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.618 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:58.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.619 --rc genhtml_branch_coverage=1 00:08:58.619 --rc genhtml_function_coverage=1 00:08:58.619 --rc genhtml_legend=1 00:08:58.619 --rc geninfo_all_blocks=1 00:08:58.619 --rc 
geninfo_unexecuted_blocks=1 00:08:58.619 00:08:58.619 ' 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:58.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.619 --rc genhtml_branch_coverage=1 00:08:58.619 --rc genhtml_function_coverage=1 00:08:58.619 --rc genhtml_legend=1 00:08:58.619 --rc geninfo_all_blocks=1 00:08:58.619 --rc geninfo_unexecuted_blocks=1 00:08:58.619 00:08:58.619 ' 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:58.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.619 --rc genhtml_branch_coverage=1 00:08:58.619 --rc genhtml_function_coverage=1 00:08:58.619 --rc genhtml_legend=1 00:08:58.619 --rc geninfo_all_blocks=1 00:08:58.619 --rc geninfo_unexecuted_blocks=1 00:08:58.619 00:08:58.619 ' 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:58.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.619 --rc genhtml_branch_coverage=1 00:08:58.619 --rc genhtml_function_coverage=1 00:08:58.619 --rc genhtml_legend=1 00:08:58.619 --rc geninfo_all_blocks=1 00:08:58.619 --rc geninfo_unexecuted_blocks=1 00:08:58.619 00:08:58.619 ' 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.619 15:26:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.619 15:26:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.619 15:26:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:58.619 15:26:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:05.189 15:26:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:05.189 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:05.189 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:05.189 Found net devices under 0000:86:00.0: cvl_0_0 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:05.189 Found net devices under 0000:86:00.1: cvl_0_1 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.189 
15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.189 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:05.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:05.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:09:05.190 00:09:05.190 --- 10.0.0.2 ping statistics --- 00:09:05.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.190 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:05.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:09:05.190 00:09:05.190 --- 10.0.0.1 ping statistics --- 00:09:05.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.190 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2876176 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
2876176 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2876176 ']' 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:05.190 [2024-12-06 15:26:10.627641] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:09:05.190 [2024-12-06 15:26:10.627685] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.190 [2024-12-06 15:26:10.706722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.190 [2024-12-06 15:26:10.748156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.190 [2024-12-06 15:26:10.748190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:05.190 [2024-12-06 15:26:10.748197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.190 [2024-12-06 15:26:10.748204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.190 [2024-12-06 15:26:10.748209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.190 [2024-12-06 15:26:10.748685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:05.190 [2024-12-06 15:26:10.881328] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:05.190 Malloc0 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:05.190 [2024-12-06 15:26:10.931715] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.190 15:26:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2876205 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2876205 /var/tmp/bdevperf.sock 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2876205 ']' 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:05.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.190 15:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:05.190 [2024-12-06 15:26:10.984652] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:09:05.190 [2024-12-06 15:26:10.984698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876205 ] 00:09:05.190 [2024-12-06 15:26:11.059561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.190 [2024-12-06 15:26:11.099871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.449 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.449 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:05.449 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:05.449 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.449 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:05.449 NVMe0n1 00:09:05.449 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.449 15:26:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:05.449 Running I/O for 10 seconds... 
00:09:07.774 11914.00 IOPS, 46.54 MiB/s [2024-12-06T14:26:14.703Z] 12265.50 IOPS, 47.91 MiB/s [2024-12-06T14:26:15.636Z] 12276.33 IOPS, 47.95 MiB/s [2024-12-06T14:26:16.572Z] 12281.75 IOPS, 47.98 MiB/s [2024-12-06T14:26:17.507Z] 12332.80 IOPS, 48.17 MiB/s [2024-12-06T14:26:18.444Z] 12352.67 IOPS, 48.25 MiB/s [2024-12-06T14:26:19.821Z] 12415.29 IOPS, 48.50 MiB/s [2024-12-06T14:26:20.760Z] 12402.88 IOPS, 48.45 MiB/s [2024-12-06T14:26:21.697Z] 12395.22 IOPS, 48.42 MiB/s [2024-12-06T14:26:21.697Z] 12404.20 IOPS, 48.45 MiB/s 00:09:15.699 Latency(us) 00:09:15.699 [2024-12-06T14:26:21.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.699 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:15.699 Verification LBA range: start 0x0 length 0x4000 00:09:15.699 NVMe0n1 : 10.05 12443.12 48.61 0.00 0.00 82008.50 9424.70 52179.14 00:09:15.699 [2024-12-06T14:26:21.697Z] =================================================================================================================== 00:09:15.699 [2024-12-06T14:26:21.697Z] Total : 12443.12 48.61 0.00 0.00 82008.50 9424.70 52179.14 00:09:15.699 { 00:09:15.699 "results": [ 00:09:15.699 { 00:09:15.699 "job": "NVMe0n1", 00:09:15.699 "core_mask": "0x1", 00:09:15.699 "workload": "verify", 00:09:15.699 "status": "finished", 00:09:15.699 "verify_range": { 00:09:15.699 "start": 0, 00:09:15.699 "length": 16384 00:09:15.699 }, 00:09:15.699 "queue_depth": 1024, 00:09:15.699 "io_size": 4096, 00:09:15.699 "runtime": 10.048283, 00:09:15.699 "iops": 12443.120879457714, 00:09:15.699 "mibps": 48.605940935381696, 00:09:15.699 "io_failed": 0, 00:09:15.699 "io_timeout": 0, 00:09:15.699 "avg_latency_us": 82008.5021239515, 00:09:15.699 "min_latency_us": 9424.700952380952, 00:09:15.699 "max_latency_us": 52179.13904761905 00:09:15.699 } 00:09:15.699 ], 00:09:15.699 "core_count": 1 00:09:15.699 } 00:09:15.699 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2876205 00:09:15.699 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2876205 ']' 00:09:15.699 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2876205 00:09:15.699 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:15.699 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.699 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2876205 00:09:15.699 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.699 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.699 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2876205' 00:09:15.699 killing process with pid 2876205 00:09:15.699 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2876205 00:09:15.699 Received shutdown signal, test time was about 10.000000 seconds 00:09:15.699 00:09:15.699 Latency(us) 00:09:15.699 [2024-12-06T14:26:21.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.699 [2024-12-06T14:26:21.697Z] =================================================================================================================== 00:09:15.699 [2024-12-06T14:26:21.697Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:15.699 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2876205 00:09:15.958 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:15.958 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:09:15.958 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:15.958 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:15.959 rmmod nvme_tcp 00:09:15.959 rmmod nvme_fabrics 00:09:15.959 rmmod nvme_keyring 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2876176 ']' 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2876176 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2876176 ']' 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2876176 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2876176 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2876176' 00:09:15.959 killing process with pid 2876176 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2876176 00:09:15.959 15:26:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2876176 00:09:16.218 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:16.218 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:16.218 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:16.218 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:16.218 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:16.218 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:16.218 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:16.218 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:16.218 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:16.218 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.218 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.218 15:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.125 15:26:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:18.125 00:09:18.125 real 0m19.763s 00:09:18.125 user 0m22.955s 00:09:18.125 sys 0m6.183s 00:09:18.125 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.125 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.125 ************************************ 00:09:18.125 END TEST nvmf_queue_depth 00:09:18.125 ************************************ 00:09:18.386 15:26:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:18.386 15:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:18.386 15:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.386 15:26:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:18.386 ************************************ 00:09:18.386 START TEST nvmf_target_multipath 00:09:18.386 ************************************ 00:09:18.386 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:18.386 * Looking for test storage... 
00:09:18.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.386 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:18.386 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:18.386 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:18.386 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:18.387 15:26:24 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:18.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.387 --rc genhtml_branch_coverage=1 00:09:18.387 --rc genhtml_function_coverage=1 00:09:18.387 --rc genhtml_legend=1 00:09:18.387 --rc geninfo_all_blocks=1 00:09:18.387 --rc geninfo_unexecuted_blocks=1 00:09:18.387 00:09:18.387 ' 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:18.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.387 --rc genhtml_branch_coverage=1 00:09:18.387 --rc genhtml_function_coverage=1 00:09:18.387 --rc genhtml_legend=1 00:09:18.387 --rc geninfo_all_blocks=1 00:09:18.387 --rc geninfo_unexecuted_blocks=1 00:09:18.387 00:09:18.387 ' 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:18.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.387 --rc genhtml_branch_coverage=1 00:09:18.387 --rc genhtml_function_coverage=1 00:09:18.387 --rc genhtml_legend=1 00:09:18.387 --rc geninfo_all_blocks=1 00:09:18.387 --rc geninfo_unexecuted_blocks=1 00:09:18.387 00:09:18.387 ' 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:18.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.387 --rc genhtml_branch_coverage=1 00:09:18.387 --rc genhtml_function_coverage=1 00:09:18.387 --rc genhtml_legend=1 00:09:18.387 --rc geninfo_all_blocks=1 00:09:18.387 --rc geninfo_unexecuted_blocks=1 00:09:18.387 00:09:18.387 ' 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:18.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:18.387 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:18.646 15:26:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:25.213 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:25.213 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:25.213 Found net devices under 0000:86:00.0: cvl_0_0 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:25.213 15:26:30 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:25.213 Found net devices under 0000:86:00.1: cvl_0_1 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.213 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
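The discovery loop above (common.sh@410-429) maps each PCI address to its kernel network interfaces by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path prefix. A minimal sketch of that pattern, run against a throwaway directory standing in for sysfs (paths and interface names below are fabricated to mirror the log, not read from real hardware):

```shell
#!/usr/bin/env bash
# Sketch of the sysfs glob used by common.sh to find net devices per PCI device.
# $root is a temporary stand-in for /sys/bus/pci/devices.
set -euo pipefail
root=$(mktemp -d)
mkdir -p "$root/0000:86:00.0/net/cvl_0_0" "$root/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
  pci_net_devs=("$root/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$root"
echo "${net_devs[*]}"
```

The `"${pci_net_devs[@]##*/}"` expansion is the same prefix-strip common.sh@427 performs before echoing "Found net devices under ...".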
00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:25.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:09:25.214 00:09:25.214 --- 10.0.0.2 ping statistics --- 00:09:25.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.214 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
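The `ipts` wrapper above installs its ACCEPT rule with an `-m comment --comment 'SPDK_NVMF:...'` tag so that teardown can later remove exactly the rules the test added, via `iptables-save | grep -v SPDK_NVMF | iptables-restore` (visible in the fini path further down). A sketch of that tag-and-filter idea using a fabricated `iptables-save` dump instead of a live firewall, so no root is needed:

```shell
#!/usr/bin/env bash
# Sketch of the SPDK_NVMF tag-and-filter cleanup pattern.
# $dump is a fabricated stand-in for real `iptables-save` output.
set -euo pipefail

dump='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -p tcp --dport 22 -j ACCEPT'

# Keep every rule except the tagged ones, mirroring
# `iptables-save | grep -v SPDK_NVMF | iptables-restore`.
cleaned=$(grep -v SPDK_NVMF <<<"$dump")
echo "$cleaned"
```

Tagging at insert time makes cleanup order-independent: rules added by other tooling on the node survive, and only the test's own rules are dropped.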
00:09:25.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:09:25.214 00:09:25.214 --- 10.0.0.1 ping statistics --- 00:09:25.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.214 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:25.214 only one NIC for nvmf test 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:25.214 15:26:30 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.214 rmmod nvme_tcp 00:09:25.214 rmmod nvme_fabrics 00:09:25.214 rmmod nvme_keyring 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
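The teardown above disables errexit (`set +e`) and retries `modprobe -v -r` inside a bounded `for i in {1..20}` loop, since module removal can fail transiently while references drain. A self-contained sketch of that retry shape, with a stand-in `flaky` function (hypothetical, not from the SPDK scripts) in place of the real unload:

```shell
#!/usr/bin/env bash
# Sketch of the bounded retry loop used for module unload in nvmfcleanup.
# `flaky` simulates an operation that fails twice before succeeding.
attempts=0
flaky() {
  attempts=$((attempts + 1))
  (( attempts >= 3 ))   # succeed on the third try
}

set +e                  # tolerate failures inside the loop
for i in {1..20}; do
  flaky && break
done
set -e                  # restore fail-fast behavior afterwards

echo "succeeded after $attempts attempts"
```

The `set +e` / `set -e` bracketing matches common.sh@124-128: only the retried command is allowed to fail, and the script returns to fail-fast mode immediately after.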
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.214 15:26:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.593 00:09:26.593 real 0m8.408s 00:09:26.593 user 0m1.795s 00:09:26.593 sys 0m4.621s 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.593 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:26.593 ************************************ 00:09:26.593 END TEST nvmf_target_multipath 00:09:26.593 ************************************ 00:09:26.852 15:26:32 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:26.852 15:26:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.852 15:26:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.852 15:26:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.852 ************************************ 00:09:26.852 START TEST nvmf_zcopy 00:09:26.852 ************************************ 00:09:26.852 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:26.852 * Looking for test storage... 00:09:26.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.852 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.853 15:26:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:26.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.853 --rc genhtml_branch_coverage=1 00:09:26.853 --rc genhtml_function_coverage=1 00:09:26.853 --rc genhtml_legend=1 00:09:26.853 --rc geninfo_all_blocks=1 00:09:26.853 --rc geninfo_unexecuted_blocks=1 00:09:26.853 00:09:26.853 ' 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:26.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.853 --rc genhtml_branch_coverage=1 00:09:26.853 --rc genhtml_function_coverage=1 00:09:26.853 --rc genhtml_legend=1 00:09:26.853 --rc geninfo_all_blocks=1 00:09:26.853 --rc geninfo_unexecuted_blocks=1 00:09:26.853 00:09:26.853 ' 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:26.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.853 --rc genhtml_branch_coverage=1 00:09:26.853 --rc genhtml_function_coverage=1 00:09:26.853 --rc genhtml_legend=1 00:09:26.853 --rc geninfo_all_blocks=1 00:09:26.853 --rc geninfo_unexecuted_blocks=1 00:09:26.853 00:09:26.853 ' 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:26.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.853 --rc genhtml_branch_coverage=1 00:09:26.853 --rc 
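The trace above is `scripts/common.sh` deciding whether `lcov --version` is below 2 (`lt 1.15 2`): it splits each version on `.`/`-`/`:` into arrays and compares field by numeric field. A simplified sketch of that numeric field-wise comparison (function name and structure are illustrative, not the exact common.sh implementation):

```shell
#!/usr/bin/env bash
# Simplified sketch of a dotted-version compare like scripts/common.sh's
# cmp_versions. ver_lt A B exits 0 when A < B, comparing fields numerically
# so that 1.2 < 1.10 (a plain string compare would get this wrong).
ver_lt() {
  local IFS=.
  local -a a=($1) b=($2)          # unquoted on purpose: split on IFS
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                        # equal is not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Numeric comparison per field is the reason the log's check treats lcov 1.15 as older than 2 and would also rank 1.10 above 1.2.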
genhtml_function_coverage=1 00:09:26.853 --rc genhtml_legend=1 00:09:26.853 --rc geninfo_all_blocks=1 00:09:26.853 --rc geninfo_unexecuted_blocks=1 00:09:26.853 00:09:26.853 ' 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.853 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.113 15:26:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.113 15:26:32 
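Earlier in this sourcing of nvmf/common.sh, `nvme gen-hostnqn` produced the host NQN `nqn.2014-08.org.nvmexpress:uuid:00ad29c2-...` used for `NVME_HOSTNQN`/`NVME_HOSTID`. A dependency-free sketch producing the same shape of identifier, assuming a Linux host (it reads the kernel's random-UUID source rather than calling the real `nvme` CLI):

```shell
#!/usr/bin/env bash
# Sketch: build a host NQN in the same uuid-based form `nvme gen-hostnqn`
# prints. /proc/sys/kernel/random/uuid is a Linux-only assumption.
set -euo pipefail
uuid=$(cat /proc/sys/kernel/random/uuid)
hostnqn="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
echo "$hostnqn"
```

The `uuid:` form lets the same value serve both as `NVME_HOSTNQN` and, with the prefix stripped, as the `--hostid` the test scripts pass to the initiator.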
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.113 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.114 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.114 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.114 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.114 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.114 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.114 15:26:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.686 15:26:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.686 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:33.687 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:33.687 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:33.687 Found net devices under 0000:86:00.0: cvl_0_0 00:09:33.687 15:26:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:33.687 Found net devices under 0000:86:00.1: cvl_0_1 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.687 15:26:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:09:33.687 00:09:33.687 --- 10.0.0.2 ping statistics --- 00:09:33.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.687 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:09:33.687 00:09:33.687 --- 10.0.0.1 ping statistics --- 00:09:33.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.687 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2885102 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2885102 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2885102 ']' 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.687 15:26:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.687 [2024-12-06 15:26:38.914570] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:09:33.687 [2024-12-06 15:26:38.914612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.687 [2024-12-06 15:26:38.992094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.687 [2024-12-06 15:26:39.030023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.688 [2024-12-06 15:26:39.030059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:33.688 [2024-12-06 15:26:39.030067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.688 [2024-12-06 15:26:39.030073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.688 [2024-12-06 15:26:39.030078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.688 [2024-12-06 15:26:39.030647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.688 [2024-12-06 15:26:39.179244] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.688 [2024-12-06 15:26:39.199447] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.688 malloc0 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:33.688 { 00:09:33.688 "params": { 00:09:33.688 "name": "Nvme$subsystem", 00:09:33.688 "trtype": "$TEST_TRANSPORT", 00:09:33.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.688 "adrfam": "ipv4", 00:09:33.688 "trsvcid": "$NVMF_PORT", 00:09:33.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.688 "hdgst": ${hdgst:-false}, 00:09:33.688 "ddgst": ${ddgst:-false} 00:09:33.688 }, 00:09:33.688 "method": "bdev_nvme_attach_controller" 00:09:33.688 } 00:09:33.688 EOF 00:09:33.688 )") 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:33.688 15:26:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:33.688 "params": { 00:09:33.688 "name": "Nvme1", 00:09:33.688 "trtype": "tcp", 00:09:33.688 "traddr": "10.0.0.2", 00:09:33.688 "adrfam": "ipv4", 00:09:33.688 "trsvcid": "4420", 00:09:33.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.688 "hdgst": false, 00:09:33.688 "ddgst": false 00:09:33.688 }, 00:09:33.688 "method": "bdev_nvme_attach_controller" 00:09:33.688 }' 00:09:33.688 [2024-12-06 15:26:39.278077] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:09:33.688 [2024-12-06 15:26:39.278119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885125 ] 00:09:33.688 [2024-12-06 15:26:39.351786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.688 [2024-12-06 15:26:39.392385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.688 Running I/O for 10 seconds... 
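The trace above shows how `gen_nvmf_target_json` assembles the bdevperf `--json` configuration on the fly: each requested subsystem contributes one JSON fragment built with a quoted heredoc, the fragments are comma-joined via `IFS=,`, and bdevperf reads the result through process substitution (`/dev/fd/62`), so no config file ever touches disk. A minimal standalone sketch of that pattern follows; it is a simplified reconstruction, not the real helper — the actual function in `nvmf/common.sh` also fills `hostnqn`, `hdgst`/`ddgst`, and pipes the joined fragments through `jq`, and the address/NQN values here are simply copied from the log.

```shell
#!/usr/bin/env bash
# Simplified sketch (assumed, not verbatim) of the gen_nvmf_target_json
# pattern seen in the trace: one heredoc JSON fragment per subsystem,
# joined with commas and printed to stdout.
gen_nvmf_target_json() {
    local subsystem config=()
    # Default to subsystem "1" when no arguments are given, as the
    # "${@:-1}" expansion in the traced function does.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments, mirroring the IFS=, / printf pair above.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

In the real run this output is consumed as `bdevperf --json /dev/fd/62`, which is why the fully-expanded `Nvme1` config appears in the log immediately after the `printf`.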
00:09:35.999 8615.00 IOPS, 67.30 MiB/s [2024-12-06T14:26:42.933Z] 8673.50 IOPS, 67.76 MiB/s [2024-12-06T14:26:43.866Z] 8717.00 IOPS, 68.10 MiB/s [2024-12-06T14:26:44.800Z] 8752.50 IOPS, 68.38 MiB/s [2024-12-06T14:26:45.847Z] 8774.80 IOPS, 68.55 MiB/s [2024-12-06T14:26:46.779Z] 8792.33 IOPS, 68.69 MiB/s [2024-12-06T14:26:47.715Z] 8801.71 IOPS, 68.76 MiB/s [2024-12-06T14:26:48.652Z] 8801.12 IOPS, 68.76 MiB/s [2024-12-06T14:26:50.030Z] 8809.67 IOPS, 68.83 MiB/s [2024-12-06T14:26:50.030Z] 8814.60 IOPS, 68.86 MiB/s 00:09:44.032 Latency(us) 00:09:44.032 [2024-12-06T14:26:50.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.032 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:44.032 Verification LBA range: start 0x0 length 0x1000 00:09:44.032 Nvme1n1 : 10.01 8817.27 68.88 0.00 0.00 14475.53 2278.16 24092.28 00:09:44.032 [2024-12-06T14:26:50.030Z] =================================================================================================================== 00:09:44.032 [2024-12-06T14:26:50.030Z] Total : 8817.27 68.88 0.00 0.00 14475.53 2278.16 24092.28 00:09:44.032 15:26:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2886967 00:09:44.032 15:26:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:44.032 15:26:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.032 15:26:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:44.032 15:26:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:44.032 15:26:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:44.032 15:26:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:44.032 15:26:49 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:44.032 15:26:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:44.032 { 00:09:44.032 "params": { 00:09:44.032 "name": "Nvme$subsystem", 00:09:44.032 "trtype": "$TEST_TRANSPORT", 00:09:44.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.032 "adrfam": "ipv4", 00:09:44.032 "trsvcid": "$NVMF_PORT", 00:09:44.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.032 "hdgst": ${hdgst:-false}, 00:09:44.032 "ddgst": ${ddgst:-false} 00:09:44.032 }, 00:09:44.032 "method": "bdev_nvme_attach_controller" 00:09:44.032 } 00:09:44.032 EOF 00:09:44.032 )") 00:09:44.032 [2024-12-06 15:26:49.797958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.032 [2024-12-06 15:26:49.797993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.032 15:26:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:44.032 15:26:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:44.032 15:26:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:44.032 15:26:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:44.032 "params": { 00:09:44.032 "name": "Nvme1", 00:09:44.032 "trtype": "tcp", 00:09:44.032 "traddr": "10.0.0.2", 00:09:44.032 "adrfam": "ipv4", 00:09:44.032 "trsvcid": "4420", 00:09:44.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.032 "hdgst": false, 00:09:44.032 "ddgst": false 00:09:44.032 }, 00:09:44.032 "method": "bdev_nvme_attach_controller" 00:09:44.032 }' 00:09:44.032 [2024-12-06 15:26:49.809948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.032 [2024-12-06 15:26:49.809962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.032 [2024-12-06 15:26:49.821976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.032 [2024-12-06 15:26:49.821986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.032 [2024-12-06 15:26:49.834005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.032 [2024-12-06 15:26:49.834014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.032 [2024-12-06 15:26:49.838457] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:09:44.032 [2024-12-06 15:26:49.838497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886967 ] 00:09:44.032 [2024-12-06 15:26:49.846039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.032 [2024-12-06 15:26:49.846049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.032 [2024-12-06 15:26:49.858068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.032 [2024-12-06 15:26:49.858077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.032 [2024-12-06 15:26:49.870100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.032 [2024-12-06 15:26:49.870109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.032 [2024-12-06 15:26:49.882133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.032 [2024-12-06 15:26:49.882142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.032 [2024-12-06 15:26:49.894161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.032 [2024-12-06 15:26:49.894170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.032 [2024-12-06 15:26:49.906198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.032 [2024-12-06 15:26:49.906206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.032 [2024-12-06 15:26:49.912684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.032 [2024-12-06 15:26:49.918228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:44.032 [2024-12-06 15:26:49.918237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:44.032 [2024-12-06 15:26:49.930261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:44.032 [2024-12-06 15:26:49.930276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:44.032 [2024-12-06 15:26:49.954057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:44.032 (the spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" / nvmf_rpc_ns_paused "Unable to add namespace" error pair above repeats for every subsequent add-namespace attempt, timestamps [2024-12-06 15:26:49.954333] through [2024-12-06 15:26:52.117072]; repeated lines elided)
00:09:44.291 Running I/O for 5 seconds...
00:09:45.329 16979.00 IOPS, 132.65 MiB/s [2024-12-06T14:26:51.327Z]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.368 [2024-12-06 15:26:52.131645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.368 [2024-12-06 15:26:52.131665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.368 [2024-12-06 15:26:52.147141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.368 [2024-12-06 15:26:52.147160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.368 [2024-12-06 15:26:52.160777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.368 17074.00 IOPS, 133.39 MiB/s [2024-12-06T14:26:52.366Z] [2024-12-06 15:26:52.160796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.368 [2024-12-06 15:26:52.174449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.368 [2024-12-06 15:26:52.174468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.368 [2024-12-06 15:26:52.188010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.368 [2024-12-06 15:26:52.188028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.369 [2024-12-06 15:26:52.202137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.369 [2024-12-06 15:26:52.202156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.369 [2024-12-06 15:26:52.215424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.369 [2024-12-06 15:26:52.215443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.369 [2024-12-06 15:26:52.229296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.369 [2024-12-06 15:26:52.229315] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.369 [2024-12-06 15:26:52.242425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.369 [2024-12-06 15:26:52.242445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.369 [2024-12-06 15:26:52.255636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.369 [2024-12-06 15:26:52.255654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.369 [2024-12-06 15:26:52.269108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.369 [2024-12-06 15:26:52.269127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.369 [2024-12-06 15:26:52.282448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.369 [2024-12-06 15:26:52.282467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.369 [2024-12-06 15:26:52.295757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.369 [2024-12-06 15:26:52.295775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.369 [2024-12-06 15:26:52.309083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.369 [2024-12-06 15:26:52.309101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.369 [2024-12-06 15:26:52.322079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.369 [2024-12-06 15:26:52.322098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.369 [2024-12-06 15:26:52.330745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.369 [2024-12-06 15:26:52.330762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:46.369 [2024-12-06 15:26:52.340359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.369 [2024-12-06 15:26:52.340383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.369 [2024-12-06 15:26:52.354969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.369 [2024-12-06 15:26:52.354987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.628 [2024-12-06 15:26:52.368788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.628 [2024-12-06 15:26:52.368807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.628 [2024-12-06 15:26:52.382466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.628 [2024-12-06 15:26:52.382484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.628 [2024-12-06 15:26:52.396381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.396399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.406980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.406998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.421275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.421294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.434973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.434991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.448535] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.448554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.461916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.461933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.475840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.475858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.484741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.484759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.498577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.498595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.512087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.512105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.525531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.525549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.539506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.539524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.553278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.553295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.567150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.567167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.581318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.581335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.590785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.590803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.600124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.600141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.629 [2024-12-06 15:26:52.614184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.629 [2024-12-06 15:26:52.614202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.627741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.627759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.641497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.641519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.655052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 
[2024-12-06 15:26:52.655069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.663794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.663812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.677918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.677937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.691286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.691304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.700148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.700166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.709550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.709568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.723963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.723982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.737525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.737543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.751220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.751238] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.765059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.765077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.778998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.779017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.792657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.792676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.806287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.806307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.820116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.820134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.833570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.833589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.847223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.847241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.888 [2024-12-06 15:26:52.860973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.860991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:46.888 [2024-12-06 15:26:52.874953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.888 [2024-12-06 15:26:52.874971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.147 [2024-12-06 15:26:52.888811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.147 [2024-12-06 15:26:52.888834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.147 [2024-12-06 15:26:52.902213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.147 [2024-12-06 15:26:52.902232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.147 [2024-12-06 15:26:52.915885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.147 [2024-12-06 15:26:52.915903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.147 [2024-12-06 15:26:52.929098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.147 [2024-12-06 15:26:52.929116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.147 [2024-12-06 15:26:52.943022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.147 [2024-12-06 15:26:52.943040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.147 [2024-12-06 15:26:52.956731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.147 [2024-12-06 15:26:52.956748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.147 [2024-12-06 15:26:52.970901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.147 [2024-12-06 15:26:52.970920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.147 [2024-12-06 15:26:52.984289] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.147 [2024-12-06 15:26:52.984307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.147 [2024-12-06 15:26:52.993197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.147 [2024-12-06 15:26:52.993215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.147 [2024-12-06 15:26:53.007722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.147 [2024-12-06 15:26:53.007741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.148 [2024-12-06 15:26:53.021221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.148 [2024-12-06 15:26:53.021239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.148 [2024-12-06 15:26:53.035383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.148 [2024-12-06 15:26:53.035401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.148 [2024-12-06 15:26:53.046401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.148 [2024-12-06 15:26:53.046419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.148 [2024-12-06 15:26:53.060274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.148 [2024-12-06 15:26:53.060292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.148 [2024-12-06 15:26:53.074339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.148 [2024-12-06 15:26:53.074357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.148 [2024-12-06 15:26:53.085503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:47.148 [2024-12-06 15:26:53.085520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.148 [2024-12-06 15:26:53.099438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.148 [2024-12-06 15:26:53.099456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.148 [2024-12-06 15:26:53.112282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.148 [2024-12-06 15:26:53.112300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.148 [2024-12-06 15:26:53.126404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.148 [2024-12-06 15:26:53.126421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.148 [2024-12-06 15:26:53.139805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.148 [2024-12-06 15:26:53.139828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.406 [2024-12-06 15:26:53.153547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.406 [2024-12-06 15:26:53.153565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.406 17128.00 IOPS, 133.81 MiB/s [2024-12-06T14:26:53.404Z] [2024-12-06 15:26:53.166806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.406 [2024-12-06 15:26:53.166825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.406 [2024-12-06 15:26:53.180651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.406 [2024-12-06 15:26:53.180670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.406 [2024-12-06 15:26:53.189506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:47.406 [2024-12-06 15:26:53.189524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.406 [2024-12-06 15:26:53.198828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.406 [2024-12-06 15:26:53.198846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.406 [2024-12-06 15:26:53.212913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.406 [2024-12-06 15:26:53.212931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.406 [2024-12-06 15:26:53.226556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.406 [2024-12-06 15:26:53.226575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.406 [2024-12-06 15:26:53.236133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.406 [2024-12-06 15:26:53.236151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.406 [2024-12-06 15:26:53.250116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.406 [2024-12-06 15:26:53.250134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.406 [2024-12-06 15:26:53.263782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.406 [2024-12-06 15:26:53.263802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.406 [2024-12-06 15:26:53.277196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.406 [2024-12-06 15:26:53.277215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.406 [2024-12-06 15:26:53.290717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.407 
[2024-12-06 15:26:53.290735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.407 [2024-12-06 15:26:53.299700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.407 [2024-12-06 15:26:53.299719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.407 [2024-12-06 15:26:53.308440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.407 [2024-12-06 15:26:53.308458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.407 [2024-12-06 15:26:53.322849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.407 [2024-12-06 15:26:53.322868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.407 [2024-12-06 15:26:53.336057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.407 [2024-12-06 15:26:53.336075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.407 [2024-12-06 15:26:53.345634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.407 [2024-12-06 15:26:53.345653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.407 [2024-12-06 15:26:53.359345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.407 [2024-12-06 15:26:53.359362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.407 [2024-12-06 15:26:53.373006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.407 [2024-12-06 15:26:53.373024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.407 [2024-12-06 15:26:53.386717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.407 [2024-12-06 15:26:53.386735] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.407 [2024-12-06 15:26:53.400177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.407 [2024-12-06 15:26:53.400196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.665 [2024-12-06 15:26:53.413531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.665 [2024-12-06 15:26:53.413550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.665 [2024-12-06 15:26:53.427356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.666 [2024-12-06 15:26:53.427383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.666 [2024-12-06 15:26:53.436167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.666 [2024-12-06 15:26:53.436187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.666 [2024-12-06 15:26:53.450256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.666 [2024-12-06 15:26:53.450276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.666 [2024-12-06 15:26:53.463972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.666 [2024-12-06 15:26:53.463991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.666 [2024-12-06 15:26:53.478152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.666 [2024-12-06 15:26:53.478172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.666 [2024-12-06 15:26:53.491878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.666 [2024-12-06 15:26:53.491896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:47.666 [2024-12-06 15:26:53.505492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.666 [2024-12-06 15:26:53.505511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.185 17148.75 IOPS, 133.97 MiB/s [2024-12-06T14:26:54.183Z] 00:09:49.222 17164.20 IOPS, 134.10 MiB/s [2024-12-06T14:26:55.220Z] 00:09:49.222 
00:09:49.222 Latency(us) 
00:09:49.222 [2024-12-06T14:26:55.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:09:49.222 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 
00:09:49.222 Nvme1n1 : 5.01 17165.73 134.11 0.00 0.00 7449.36 3292.40 16727.28 
00:09:49.222 [2024-12-06T14:26:55.220Z] =================================================================================================================== 
00:09:49.222 [2024-12-06T14:26:55.220Z] Total : 17165.73 134.11 0.00 0.00 7449.36 3292.40 16727.28 
00:09:49.481 [2024-12-06 15:26:55.325956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.481 [2024-12-06 15:26:55.325967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2886967) - No such process 00:09:49.481 15:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2886967 00:09:49.481 15:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.481 15:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.481 15:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.481 15:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.481 15:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:49.481 15:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.481 15:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.481 delay0 00:09:49.481 15:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.481 15:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns
nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:49.481 15:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.481 15:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.481 15:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.481 15:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:49.740 [2024-12-06 15:26:55.519506] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:57.858 [2024-12-06 15:27:02.575652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18060b0 is same with the state(6) to be set 00:09:57.858 [2024-12-06 15:27:02.575692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18060b0 is same with the state(6) to be set 00:09:57.858 Initializing NVMe Controllers 00:09:57.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:57.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:57.858 Initialization complete. Launching workers. 
00:09:57.858 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 8812 00:09:57.858 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 9082, failed to submit 50 00:09:57.858 success 8921, unsuccessful 161, failed 0 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.858 rmmod nvme_tcp 00:09:57.858 rmmod nvme_fabrics 00:09:57.858 rmmod nvme_keyring 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2885102 ']' 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2885102 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2885102 ']' 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2885102 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2885102 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2885102' 00:09:57.858 killing process with pid 2885102 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2885102 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2885102 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.858 15:27:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.252 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:59.252 00:09:59.252 real 0m32.292s 00:09:59.252 user 0m42.985s 00:09:59.252 sys 0m11.821s 00:09:59.252 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.252 15:27:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.252 ************************************ 00:09:59.252 END TEST nvmf_zcopy 00:09:59.252 ************************************ 00:09:59.252 15:27:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:59.252 15:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.252 15:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.252 15:27:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.252 ************************************ 00:09:59.252 START TEST nvmf_nmic 00:09:59.252 ************************************ 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:59.252 * Looking for test storage... 
00:09:59.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.252 15:27:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:59.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.252 --rc genhtml_branch_coverage=1 00:09:59.252 --rc genhtml_function_coverage=1 00:09:59.252 --rc genhtml_legend=1 00:09:59.252 --rc geninfo_all_blocks=1 00:09:59.252 --rc geninfo_unexecuted_blocks=1 
00:09:59.252 00:09:59.252 ' 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:59.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.252 --rc genhtml_branch_coverage=1 00:09:59.252 --rc genhtml_function_coverage=1 00:09:59.252 --rc genhtml_legend=1 00:09:59.252 --rc geninfo_all_blocks=1 00:09:59.252 --rc geninfo_unexecuted_blocks=1 00:09:59.252 00:09:59.252 ' 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:59.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.252 --rc genhtml_branch_coverage=1 00:09:59.252 --rc genhtml_function_coverage=1 00:09:59.252 --rc genhtml_legend=1 00:09:59.252 --rc geninfo_all_blocks=1 00:09:59.252 --rc geninfo_unexecuted_blocks=1 00:09:59.252 00:09:59.252 ' 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:59.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.252 --rc genhtml_branch_coverage=1 00:09:59.252 --rc genhtml_function_coverage=1 00:09:59.252 --rc genhtml_legend=1 00:09:59.252 --rc geninfo_all_blocks=1 00:09:59.252 --rc geninfo_unexecuted_blocks=1 00:09:59.252 00:09:59.252 ' 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.252 15:27:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.252 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:59.253 
15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:59.253 15:27:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:05.836 15:27:10 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:05.836 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:05.836 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:05.836 Found net devices under 0000:86:00.0: cvl_0_0 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:05.836 Found net devices under 0000:86:00.1: cvl_0_1 00:10:05.836 
15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:05.836 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:05.837 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:05.837 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:05.837 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:05.837 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:05.837 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:05.837 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.837 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:05.837 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:05.837 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:05.837 15:27:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:05.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:05.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:10:05.837 00:10:05.837 --- 10.0.0.2 ping statistics --- 00:10:05.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.837 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:05.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:05.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:10:05.837 00:10:05.837 --- 10.0.0.1 ping statistics --- 00:10:05.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.837 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2893076 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2893076 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2893076 ']' 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.837 [2024-12-06 15:27:11.280318] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:10:05.837 [2024-12-06 15:27:11.280379] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.837 [2024-12-06 15:27:11.357436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.837 [2024-12-06 15:27:11.398833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.837 [2024-12-06 15:27:11.398871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:05.837 [2024-12-06 15:27:11.398878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.837 [2024-12-06 15:27:11.398884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.837 [2024-12-06 15:27:11.398890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.837 [2024-12-06 15:27:11.400324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.837 [2024-12-06 15:27:11.400441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.837 [2024-12-06 15:27:11.400475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.837 [2024-12-06 15:27:11.400475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.837 [2024-12-06 15:27:11.550812] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:05.837 
15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.837 Malloc0 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.837 [2024-12-06 15:27:11.620580] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:05.837 test case1: single bdev can't be used in multiple subsystems 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.837 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.837 [2024-12-06 15:27:11.648448] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:05.837 [2024-12-06 
15:27:11.648470] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:05.837 [2024-12-06 15:27:11.648477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.837 request: 00:10:05.837 { 00:10:05.837 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:05.838 "namespace": { 00:10:05.838 "bdev_name": "Malloc0", 00:10:05.838 "no_auto_visible": false, 00:10:05.838 "hide_metadata": false 00:10:05.838 }, 00:10:05.838 "method": "nvmf_subsystem_add_ns", 00:10:05.838 "req_id": 1 00:10:05.838 } 00:10:05.838 Got JSON-RPC error response 00:10:05.838 response: 00:10:05.838 { 00:10:05.838 "code": -32602, 00:10:05.838 "message": "Invalid parameters" 00:10:05.838 } 00:10:05.838 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:05.838 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:05.838 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:05.838 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:05.838 Adding namespace failed - expected result. 
00:10:05.838 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:05.838 test case2: host connect to nvmf target in multiple paths 00:10:05.838 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:05.838 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.838 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:05.838 [2024-12-06 15:27:11.660603] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:05.838 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.838 15:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.773 15:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:08.148 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:08.148 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:08.148 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.148 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:08.148 15:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:10.080 15:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:10.080 15:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:10.080 15:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.080 15:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:10.080 15:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.080 15:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:10.080 15:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:10.080 [global] 00:10:10.080 thread=1 00:10:10.080 invalidate=1 00:10:10.080 rw=write 00:10:10.080 time_based=1 00:10:10.080 runtime=1 00:10:10.080 ioengine=libaio 00:10:10.080 direct=1 00:10:10.080 bs=4096 00:10:10.080 iodepth=1 00:10:10.080 norandommap=0 00:10:10.080 numjobs=1 00:10:10.080 00:10:10.080 verify_dump=1 00:10:10.080 verify_backlog=512 00:10:10.080 verify_state_save=0 00:10:10.080 do_verify=1 00:10:10.080 verify=crc32c-intel 00:10:10.080 [job0] 00:10:10.080 filename=/dev/nvme0n1 00:10:10.080 Could not set queue depth (nvme0n1) 00:10:10.336 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.336 fio-3.35 00:10:10.336 Starting 1 thread 00:10:11.709 00:10:11.709 job0: (groupid=0, jobs=1): err= 0: pid=2894152: Fri Dec 6 15:27:17 2024 00:10:11.709 read: IOPS=2278, BW=9115KiB/s (9334kB/s)(9124KiB/1001msec) 00:10:11.709 slat (nsec): min=7327, max=45176, avg=8646.35, stdev=1933.07 00:10:11.709 clat (usec): min=165, max=3409, avg=231.08, stdev=70.42 00:10:11.709 lat (usec): min=174, max=3418, 
avg=239.73, stdev=70.50 00:10:11.709 clat percentiles (usec): 00:10:11.709 | 1.00th=[ 182], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 208], 00:10:11.709 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 241], 00:10:11.709 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 265], 00:10:11.709 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 392], 99.95th=[ 519], 00:10:11.709 | 99.99th=[ 3425] 00:10:11.709 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:11.709 slat (usec): min=10, max=27239, avg=22.82, stdev=538.13 00:10:11.709 clat (usec): min=108, max=328, avg=148.49, stdev=17.84 00:10:11.709 lat (usec): min=120, max=27453, avg=171.30, stdev=539.73 00:10:11.709 clat percentiles (usec): 00:10:11.709 | 1.00th=[ 115], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 127], 00:10:11.709 | 30.00th=[ 143], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:10:11.709 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 167], 95.00th=[ 172], 00:10:11.709 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 241], 99.95th=[ 293], 00:10:11.709 | 99.99th=[ 330] 00:10:11.709 bw ( KiB/s): min=11024, max=11024, per=100.00%, avg=11024.00, stdev= 0.00, samples=1 00:10:11.709 iops : min= 2756, max= 2756, avg=2756.00, stdev= 0.00, samples=1 00:10:11.709 lat (usec) : 250=90.35%, 500=9.61%, 750=0.02% 00:10:11.709 lat (msec) : 4=0.02% 00:10:11.709 cpu : usr=3.80%, sys=8.10%, ctx=4844, majf=0, minf=1 00:10:11.709 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.709 issued rwts: total=2281,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.709 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.709 00:10:11.710 Run status group 0 (all jobs): 00:10:11.710 READ: bw=9115KiB/s (9334kB/s), 9115KiB/s-9115KiB/s (9334kB/s-9334kB/s), io=9124KiB (9343kB), 
run=1001-1001msec 00:10:11.710 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:11.710 00:10:11.710 Disk stats (read/write): 00:10:11.710 nvme0n1: ios=2074/2287, merge=0/0, ticks=1431/322, in_queue=1753, util=98.60% 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:11.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 
-- # for i in {1..20} 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.710 rmmod nvme_tcp 00:10:11.710 rmmod nvme_fabrics 00:10:11.710 rmmod nvme_keyring 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2893076 ']' 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2893076 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2893076 ']' 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2893076 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2893076 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2893076' 00:10:11.710 killing process with pid 2893076 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2893076 00:10:11.710 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2893076 00:10:11.969 15:27:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.969 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.969 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.969 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:11.969 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:11.969 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.969 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.969 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.969 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.969 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.969 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.969 15:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.506 15:27:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:14.506 00:10:14.506 real 0m14.901s 00:10:14.506 user 0m33.267s 00:10:14.506 sys 0m5.393s 00:10:14.506 15:27:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.506 15:27:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.506 ************************************ 00:10:14.506 END TEST nvmf_nmic 00:10:14.506 ************************************ 00:10:14.506 15:27:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:14.506 15:27:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.506 15:27:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.506 15:27:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.506 ************************************ 00:10:14.506 START TEST nvmf_fio_target 00:10:14.506 ************************************ 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:14.506 * Looking for test storage... 00:10:14.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:14.506 15:27:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:14.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.506 --rc genhtml_branch_coverage=1 00:10:14.506 --rc genhtml_function_coverage=1 00:10:14.506 --rc genhtml_legend=1 00:10:14.506 --rc geninfo_all_blocks=1 00:10:14.506 --rc geninfo_unexecuted_blocks=1 00:10:14.506 00:10:14.506 ' 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:14.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.506 --rc genhtml_branch_coverage=1 00:10:14.506 --rc genhtml_function_coverage=1 00:10:14.506 --rc genhtml_legend=1 00:10:14.506 --rc geninfo_all_blocks=1 00:10:14.506 --rc geninfo_unexecuted_blocks=1 00:10:14.506 00:10:14.506 ' 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:14.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.506 --rc genhtml_branch_coverage=1 00:10:14.506 --rc genhtml_function_coverage=1 00:10:14.506 --rc genhtml_legend=1 00:10:14.506 --rc geninfo_all_blocks=1 00:10:14.506 --rc geninfo_unexecuted_blocks=1 00:10:14.506 00:10:14.506 ' 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:14.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.506 --rc genhtml_branch_coverage=1 00:10:14.506 --rc genhtml_function_coverage=1 00:10:14.506 --rc genhtml_legend=1 00:10:14.506 --rc geninfo_all_blocks=1 00:10:14.506 --rc geninfo_unexecuted_blocks=1 00:10:14.506 00:10:14.506 ' 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:14.506 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.507 15:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:21.079 15:27:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.079 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:21.079 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:21.080 15:27:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:21.080 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:21.080 Found net devices under 0000:86:00.0: cvl_0_0 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:21.080 Found net devices under 0000:86:00.1: cvl_0_1 
00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.080 15:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:21.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:21.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:10:21.080 00:10:21.080 --- 10.0.0.2 ping statistics --- 00:10:21.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.080 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:10:21.080 00:10:21.080 --- 10.0.0.1 ping statistics --- 00:10:21.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.080 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2897924 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2897924 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2897924 ']' 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.080 15:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.080 [2024-12-06 15:27:26.251293] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:10:21.080 [2024-12-06 15:27:26.251337] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.080 [2024-12-06 15:27:26.329071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.080 [2024-12-06 15:27:26.368964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.080 [2024-12-06 15:27:26.368999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.080 [2024-12-06 15:27:26.369007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.080 [2024-12-06 15:27:26.369014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.080 [2024-12-06 15:27:26.369019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:21.080 [2024-12-06 15:27:26.370470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.080 [2024-12-06 15:27:26.370579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.080 [2024-12-06 15:27:26.370685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.080 [2024-12-06 15:27:26.370686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.340 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.340 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:21.340 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:21.340 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:21.340 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.340 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.340 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:21.340 [2024-12-06 15:27:27.303387] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.599 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.599 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:21.599 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.857 15:27:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:21.857 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.116 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:22.116 15:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.375 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:22.375 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:22.375 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.634 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:22.634 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.893 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:22.893 15:27:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.151 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:23.151 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:23.410 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:23.669 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:23.669 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.669 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:23.669 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:23.926 15:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.184 [2024-12-06 15:27:30.064952] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.184 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:24.445 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:24.702 15:27:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:25.647 15:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:25.647 15:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:25.647 15:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:25.647 15:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:25.647 15:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:25.647 15:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:28.179 15:27:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:28.179 15:27:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:28.179 15:27:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.179 15:27:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:28.179 15:27:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.179 15:27:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:28.179 15:27:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:28.179 [global] 00:10:28.179 thread=1 00:10:28.179 invalidate=1 00:10:28.179 rw=write 00:10:28.179 time_based=1 00:10:28.179 runtime=1 00:10:28.179 ioengine=libaio 00:10:28.179 direct=1 00:10:28.179 bs=4096 00:10:28.179 iodepth=1 00:10:28.179 norandommap=0 00:10:28.179 numjobs=1 00:10:28.179 00:10:28.179 
verify_dump=1 00:10:28.179 verify_backlog=512 00:10:28.179 verify_state_save=0 00:10:28.179 do_verify=1 00:10:28.179 verify=crc32c-intel 00:10:28.179 [job0] 00:10:28.179 filename=/dev/nvme0n1 00:10:28.179 [job1] 00:10:28.179 filename=/dev/nvme0n2 00:10:28.179 [job2] 00:10:28.179 filename=/dev/nvme0n3 00:10:28.179 [job3] 00:10:28.179 filename=/dev/nvme0n4 00:10:28.179 Could not set queue depth (nvme0n1) 00:10:28.179 Could not set queue depth (nvme0n2) 00:10:28.179 Could not set queue depth (nvme0n3) 00:10:28.179 Could not set queue depth (nvme0n4) 00:10:28.179 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.179 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.179 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.179 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.179 fio-3.35 00:10:28.179 Starting 4 threads 00:10:29.576 00:10:29.576 job0: (groupid=0, jobs=1): err= 0: pid=2899329: Fri Dec 6 15:27:35 2024 00:10:29.576 read: IOPS=2034, BW=8139KiB/s (8334kB/s)(8196KiB/1007msec) 00:10:29.576 slat (nsec): min=6155, max=30565, avg=7917.85, stdev=1889.08 00:10:29.576 clat (usec): min=187, max=40460, avg=265.52, stdev=888.88 00:10:29.576 lat (usec): min=194, max=40482, avg=273.44, stdev=889.19 00:10:29.576 clat percentiles (usec): 00:10:29.576 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:10:29.576 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247], 00:10:29.576 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:10:29.576 | 99.00th=[ 392], 99.50th=[ 400], 99.90th=[ 433], 99.95th=[ 453], 00:10:29.576 | 99.99th=[40633] 00:10:29.576 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:10:29.576 slat (nsec): min=8984, max=43663, avg=11144.24, 
stdev=2575.42 00:10:29.576 clat (usec): min=119, max=315, avg=158.83, stdev=18.62 00:10:29.576 lat (usec): min=130, max=354, avg=169.98, stdev=19.22 00:10:29.576 clat percentiles (usec): 00:10:29.576 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:10:29.576 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:10:29.576 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 194], 00:10:29.576 | 99.00th=[ 221], 99.50th=[ 235], 99.90th=[ 277], 99.95th=[ 285], 00:10:29.576 | 99.99th=[ 314] 00:10:29.576 bw ( KiB/s): min= 9224, max=11256, per=32.53%, avg=10240.00, stdev=1436.84, samples=2 00:10:29.576 iops : min= 2306, max= 2814, avg=2560.00, stdev=359.21, samples=2 00:10:29.576 lat (usec) : 250=83.66%, 500=16.32% 00:10:29.576 lat (msec) : 50=0.02% 00:10:29.576 cpu : usr=2.98%, sys=4.17%, ctx=4609, majf=0, minf=1 00:10:29.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.576 issued rwts: total=2049,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.576 job1: (groupid=0, jobs=1): err= 0: pid=2899354: Fri Dec 6 15:27:35 2024 00:10:29.576 read: IOPS=2468, BW=9874KiB/s (10.1MB/s)(9884KiB/1001msec) 00:10:29.576 slat (nsec): min=6206, max=16314, avg=7079.21, stdev=645.65 00:10:29.576 clat (usec): min=172, max=419, avg=228.29, stdev=25.78 00:10:29.576 lat (usec): min=178, max=426, avg=235.37, stdev=25.78 00:10:29.576 clat percentiles (usec): 00:10:29.576 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 200], 00:10:29.576 | 30.00th=[ 208], 40.00th=[ 221], 50.00th=[ 237], 60.00th=[ 243], 00:10:29.576 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 262], 00:10:29.576 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 322], 99.95th=[ 326], 00:10:29.576 | 
99.99th=[ 420] 00:10:29.576 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:29.576 slat (nsec): min=9178, max=40711, avg=10153.94, stdev=1398.87 00:10:29.576 clat (usec): min=115, max=348, avg=149.25, stdev=15.26 00:10:29.576 lat (usec): min=125, max=376, avg=159.40, stdev=15.60 00:10:29.576 clat percentiles (usec): 00:10:29.576 | 1.00th=[ 122], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 139], 00:10:29.576 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:10:29.576 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 172], 00:10:29.576 | 99.00th=[ 188], 99.50th=[ 198], 99.90th=[ 326], 99.95th=[ 334], 00:10:29.576 | 99.99th=[ 351] 00:10:29.576 bw ( KiB/s): min=12288, max=12288, per=39.04%, avg=12288.00, stdev= 0.00, samples=1 00:10:29.576 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:29.576 lat (usec) : 250=88.13%, 500=11.87% 00:10:29.576 cpu : usr=1.90%, sys=5.00%, ctx=5032, majf=0, minf=1 00:10:29.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.576 issued rwts: total=2471,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.576 job2: (groupid=0, jobs=1): err= 0: pid=2899386: Fri Dec 6 15:27:35 2024 00:10:29.576 read: IOPS=22, BW=89.0KiB/s (91.1kB/s)(92.0KiB/1034msec) 00:10:29.576 slat (nsec): min=9650, max=22651, avg=16139.83, stdev=4374.03 00:10:29.576 clat (usec): min=400, max=41305, avg=39212.74, stdev=8461.72 00:10:29.576 lat (usec): min=412, max=41315, avg=39228.88, stdev=8462.57 00:10:29.576 clat percentiles (usec): 00:10:29.576 | 1.00th=[ 400], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:29.576 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:29.576 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:29.576 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:29.576 | 99.99th=[41157] 00:10:29.576 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:10:29.576 slat (nsec): min=10332, max=38089, avg=12287.94, stdev=2341.57 00:10:29.576 clat (usec): min=205, max=505, avg=240.68, stdev=14.63 00:10:29.577 lat (usec): min=217, max=523, avg=252.97, stdev=14.70 00:10:29.577 clat percentiles (usec): 00:10:29.577 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:10:29.577 | 30.00th=[ 239], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 241], 00:10:29.577 | 70.00th=[ 243], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 251], 00:10:29.577 | 99.00th=[ 269], 99.50th=[ 314], 99.90th=[ 506], 99.95th=[ 506], 00:10:29.577 | 99.99th=[ 506] 00:10:29.577 bw ( KiB/s): min= 4096, max= 4096, per=13.01%, avg=4096.00, stdev= 0.00, samples=1 00:10:29.577 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:29.577 lat (usec) : 250=90.84%, 500=4.86%, 750=0.19% 00:10:29.577 lat (msec) : 50=4.11% 00:10:29.577 cpu : usr=0.68%, sys=0.68%, ctx=535, majf=0, minf=1 00:10:29.577 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.577 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.577 job3: (groupid=0, jobs=1): err= 0: pid=2899397: Fri Dec 6 15:27:35 2024 00:10:29.577 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:29.577 slat (nsec): min=7269, max=41437, avg=8485.09, stdev=1221.49 00:10:29.577 clat (usec): min=198, max=516, avg=250.55, stdev=42.32 00:10:29.577 lat (usec): min=206, max=525, avg=259.04, stdev=42.31 00:10:29.577 clat percentiles 
(usec): 00:10:29.577 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:10:29.577 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:10:29.577 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 281], 00:10:29.577 | 99.00th=[ 482], 99.50th=[ 494], 99.90th=[ 510], 99.95th=[ 515], 00:10:29.577 | 99.99th=[ 519] 00:10:29.577 write: IOPS=2501, BW=9.77MiB/s (10.2MB/s)(9.78MiB/1001msec); 0 zone resets 00:10:29.577 slat (nsec): min=10840, max=52001, avg=12466.30, stdev=2088.03 00:10:29.577 clat (usec): min=127, max=285, avg=169.17, stdev=25.86 00:10:29.577 lat (usec): min=138, max=304, avg=181.64, stdev=26.36 00:10:29.577 clat percentiles (usec): 00:10:29.577 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:10:29.577 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:10:29.577 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 202], 95.00th=[ 239], 00:10:29.577 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 269], 99.95th=[ 277], 00:10:29.577 | 99.99th=[ 285] 00:10:29.577 bw ( KiB/s): min= 8912, max= 8912, per=28.32%, avg=8912.00, stdev= 0.00, samples=1 00:10:29.577 iops : min= 2228, max= 2228, avg=2228.00, stdev= 0.00, samples=1 00:10:29.577 lat (usec) : 250=85.43%, 500=14.41%, 750=0.15% 00:10:29.577 cpu : usr=5.10%, sys=6.30%, ctx=4555, majf=0, minf=1 00:10:29.577 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.577 issued rwts: total=2048,2504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.577 00:10:29.577 Run status group 0 (all jobs): 00:10:29.577 READ: bw=24.9MiB/s (26.1MB/s), 89.0KiB/s-9874KiB/s (91.1kB/s-10.1MB/s), io=25.7MiB (27.0MB), run=1001-1034msec 00:10:29.577 WRITE: bw=30.7MiB/s (32.2MB/s), 1981KiB/s-9.99MiB/s 
(2028kB/s-10.5MB/s), io=31.8MiB (33.3MB), run=1001-1034msec 00:10:29.577 00:10:29.577 Disk stats (read/write): 00:10:29.577 nvme0n1: ios=1851/2048, merge=0/0, ticks=442/321, in_queue=763, util=82.26% 00:10:29.577 nvme0n2: ios=2039/2048, merge=0/0, ticks=445/292, in_queue=737, util=83.14% 00:10:29.577 nvme0n3: ios=17/512, merge=0/0, ticks=657/118, in_queue=775, util=87.66% 00:10:29.577 nvme0n4: ios=1642/2048, merge=0/0, ticks=1291/319, in_queue=1610, util=97.91% 00:10:29.577 15:27:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:29.577 [global] 00:10:29.577 thread=1 00:10:29.577 invalidate=1 00:10:29.577 rw=randwrite 00:10:29.577 time_based=1 00:10:29.577 runtime=1 00:10:29.577 ioengine=libaio 00:10:29.577 direct=1 00:10:29.577 bs=4096 00:10:29.577 iodepth=1 00:10:29.577 norandommap=0 00:10:29.577 numjobs=1 00:10:29.577 00:10:29.577 verify_dump=1 00:10:29.577 verify_backlog=512 00:10:29.577 verify_state_save=0 00:10:29.577 do_verify=1 00:10:29.577 verify=crc32c-intel 00:10:29.577 [job0] 00:10:29.577 filename=/dev/nvme0n1 00:10:29.577 [job1] 00:10:29.577 filename=/dev/nvme0n2 00:10:29.577 [job2] 00:10:29.577 filename=/dev/nvme0n3 00:10:29.577 [job3] 00:10:29.577 filename=/dev/nvme0n4 00:10:29.577 Could not set queue depth (nvme0n1) 00:10:29.577 Could not set queue depth (nvme0n2) 00:10:29.577 Could not set queue depth (nvme0n3) 00:10:29.577 Could not set queue depth (nvme0n4) 00:10:29.841 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.841 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.841 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.841 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:29.841 fio-3.35 00:10:29.841 Starting 4 threads 00:10:31.210 00:10:31.210 job0: (groupid=0, jobs=1): err= 0: pid=2899843: Fri Dec 6 15:27:36 2024 00:10:31.210 read: IOPS=392, BW=1570KiB/s (1608kB/s)(1572KiB/1001msec) 00:10:31.210 slat (nsec): min=7482, max=24211, avg=9260.03, stdev=3395.16 00:10:31.210 clat (usec): min=178, max=41009, avg=2295.38, stdev=8961.14 00:10:31.210 lat (usec): min=187, max=41031, avg=2304.64, stdev=8964.00 00:10:31.210 clat percentiles (usec): 00:10:31.210 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:10:31.210 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 223], 00:10:31.210 | 70.00th=[ 231], 80.00th=[ 247], 90.00th=[ 310], 95.00th=[40633], 00:10:31.210 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:31.210 | 99.99th=[41157] 00:10:31.210 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:31.210 slat (nsec): min=10300, max=36736, avg=11888.95, stdev=2239.55 00:10:31.210 clat (usec): min=136, max=244, avg=166.42, stdev=22.55 00:10:31.210 lat (usec): min=146, max=270, avg=178.31, stdev=22.73 00:10:31.210 clat percentiles (usec): 00:10:31.210 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:10:31.210 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:10:31.210 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 239], 00:10:31.210 | 99.00th=[ 243], 99.50th=[ 243], 99.90th=[ 245], 99.95th=[ 245], 00:10:31.210 | 99.99th=[ 245] 00:10:31.210 bw ( KiB/s): min= 4096, max= 4096, per=16.75%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.210 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.210 lat (usec) : 250=91.60%, 500=6.19% 00:10:31.210 lat (msec) : 50=2.21% 00:10:31.210 cpu : usr=1.60%, sys=0.60%, ctx=907, majf=0, minf=1 00:10:31.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.210 issued rwts: total=393,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.210 job1: (groupid=0, jobs=1): err= 0: pid=2899865: Fri Dec 6 15:27:36 2024 00:10:31.210 read: IOPS=2068, BW=8276KiB/s (8474kB/s)(8284KiB/1001msec) 00:10:31.210 slat (nsec): min=7130, max=22232, avg=8219.85, stdev=912.34 00:10:31.210 clat (usec): min=177, max=523, avg=250.89, stdev=35.68 00:10:31.210 lat (usec): min=185, max=532, avg=259.11, stdev=35.78 00:10:31.210 clat percentiles (usec): 00:10:31.210 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:10:31.210 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:10:31.210 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 302], 00:10:31.210 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[ 519], 99.95th=[ 523], 00:10:31.210 | 99.99th=[ 523] 00:10:31.210 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:31.210 slat (nsec): min=10407, max=45050, avg=11543.18, stdev=1609.97 00:10:31.210 clat (usec): min=112, max=358, avg=164.22, stdev=33.11 00:10:31.210 lat (usec): min=124, max=370, avg=175.76, stdev=33.37 00:10:31.210 clat percentiles (usec): 00:10:31.210 | 1.00th=[ 121], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 139], 00:10:31.210 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 157], 60.00th=[ 165], 00:10:31.210 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 208], 95.00th=[ 231], 00:10:31.210 | 99.00th=[ 265], 99.50th=[ 310], 99.90th=[ 347], 99.95th=[ 347], 00:10:31.210 | 99.99th=[ 359] 00:10:31.210 bw ( KiB/s): min=11000, max=11000, per=44.98%, avg=11000.00, stdev= 0.00, samples=1 00:10:31.210 iops : min= 2750, max= 2750, avg=2750.00, stdev= 0.00, samples=1 00:10:31.210 lat (usec) : 250=82.81%, 500=17.06%, 750=0.13% 00:10:31.210 cpu : usr=4.10%, sys=7.30%, ctx=4632, majf=0, minf=1 
00:10:31.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.210 issued rwts: total=2071,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.210 job2: (groupid=0, jobs=1): err= 0: pid=2899875: Fri Dec 6 15:27:36 2024 00:10:31.210 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:10:31.210 slat (nsec): min=10486, max=23812, avg=21572.64, stdev=3212.26 00:10:31.210 clat (usec): min=40853, max=41099, avg=40974.63, stdev=50.13 00:10:31.210 lat (usec): min=40876, max=41112, avg=40996.20, stdev=48.67 00:10:31.210 clat percentiles (usec): 00:10:31.210 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:31.210 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:31.210 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:31.210 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:31.210 | 99.99th=[41157] 00:10:31.210 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:31.210 slat (nsec): min=10094, max=49537, avg=12786.52, stdev=2666.38 00:10:31.210 clat (usec): min=142, max=314, avg=184.62, stdev=19.48 00:10:31.210 lat (usec): min=154, max=355, avg=197.41, stdev=20.19 00:10:31.210 clat percentiles (usec): 00:10:31.210 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 169], 00:10:31.210 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188], 00:10:31.210 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 219], 00:10:31.210 | 99.00th=[ 235], 99.50th=[ 258], 99.90th=[ 314], 99.95th=[ 314], 00:10:31.210 | 99.99th=[ 314] 00:10:31.210 bw ( KiB/s): min= 4096, max= 4096, per=16.75%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.210 iops : min= 1024, max= 
1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.210 lat (usec) : 250=95.32%, 500=0.56% 00:10:31.210 lat (msec) : 50=4.12% 00:10:31.210 cpu : usr=0.40%, sys=1.10%, ctx=535, majf=0, minf=2 00:10:31.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.210 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.210 job3: (groupid=0, jobs=1): err= 0: pid=2899879: Fri Dec 6 15:27:36 2024 00:10:31.210 read: IOPS=2048, BW=8196KiB/s (8393kB/s)(8204KiB/1001msec) 00:10:31.210 slat (nsec): min=6345, max=23170, avg=7129.17, stdev=808.99 00:10:31.210 clat (usec): min=181, max=520, avg=250.28, stdev=37.37 00:10:31.210 lat (usec): min=188, max=528, avg=257.41, stdev=37.38 00:10:31.210 clat percentiles (usec): 00:10:31.210 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 231], 00:10:31.211 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:10:31.211 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 285], 00:10:31.211 | 99.00th=[ 474], 99.50th=[ 482], 99.90th=[ 502], 99.95th=[ 506], 00:10:31.211 | 99.99th=[ 523] 00:10:31.211 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:31.211 slat (nsec): min=8998, max=44827, avg=9888.71, stdev=1140.68 00:10:31.211 clat (usec): min=112, max=395, avg=171.17, stdev=38.75 00:10:31.211 lat (usec): min=122, max=405, avg=181.06, stdev=38.86 00:10:31.211 clat percentiles (usec): 00:10:31.211 | 1.00th=[ 126], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:10:31.211 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 161], 60.00th=[ 169], 00:10:31.211 | 70.00th=[ 180], 80.00th=[ 192], 90.00th=[ 221], 95.00th=[ 269], 00:10:31.211 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 371], 99.95th=[ 388], 
00:10:31.211 | 99.99th=[ 396] 00:10:31.211 bw ( KiB/s): min=10416, max=10416, per=42.59%, avg=10416.00, stdev= 0.00, samples=1 00:10:31.211 iops : min= 2604, max= 2604, avg=2604.00, stdev= 0.00, samples=1 00:10:31.211 lat (usec) : 250=76.92%, 500=23.01%, 750=0.07% 00:10:31.211 cpu : usr=1.10%, sys=5.30%, ctx=4611, majf=0, minf=1 00:10:31.211 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.211 issued rwts: total=2051,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.211 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.211 00:10:31.211 Run status group 0 (all jobs): 00:10:31.211 READ: bw=17.6MiB/s (18.5MB/s), 87.6KiB/s-8276KiB/s (89.7kB/s-8474kB/s), io=17.7MiB (18.6MB), run=1001-1005msec 00:10:31.211 WRITE: bw=23.9MiB/s (25.0MB/s), 2038KiB/s-9.99MiB/s (2087kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1005msec 00:10:31.211 00:10:31.211 Disk stats (read/write): 00:10:31.211 nvme0n1: ios=53/512, merge=0/0, ticks=1647/77, in_queue=1724, util=99.70% 00:10:31.211 nvme0n2: ios=1861/2048, merge=0/0, ticks=1316/324, in_queue=1640, util=98.56% 00:10:31.211 nvme0n3: ios=17/512, merge=0/0, ticks=697/92, in_queue=789, util=88.90% 00:10:31.211 nvme0n4: ios=1781/2048, merge=0/0, ticks=443/347, in_queue=790, util=89.56% 00:10:31.211 15:27:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:31.211 [global] 00:10:31.211 thread=1 00:10:31.211 invalidate=1 00:10:31.211 rw=write 00:10:31.211 time_based=1 00:10:31.211 runtime=1 00:10:31.211 ioengine=libaio 00:10:31.211 direct=1 00:10:31.211 bs=4096 00:10:31.211 iodepth=128 00:10:31.211 norandommap=0 00:10:31.211 numjobs=1 00:10:31.211 00:10:31.211 verify_dump=1 00:10:31.211 
verify_backlog=512 00:10:31.211 verify_state_save=0 00:10:31.211 do_verify=1 00:10:31.211 verify=crc32c-intel 00:10:31.211 [job0] 00:10:31.211 filename=/dev/nvme0n1 00:10:31.211 [job1] 00:10:31.211 filename=/dev/nvme0n2 00:10:31.211 [job2] 00:10:31.211 filename=/dev/nvme0n3 00:10:31.211 [job3] 00:10:31.211 filename=/dev/nvme0n4 00:10:31.211 Could not set queue depth (nvme0n1) 00:10:31.211 Could not set queue depth (nvme0n2) 00:10:31.211 Could not set queue depth (nvme0n3) 00:10:31.211 Could not set queue depth (nvme0n4) 00:10:31.211 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.211 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.211 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.211 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.211 fio-3.35 00:10:31.211 Starting 4 threads 00:10:32.581 00:10:32.581 job0: (groupid=0, jobs=1): err= 0: pid=2900254: Fri Dec 6 15:27:38 2024 00:10:32.581 read: IOPS=4588, BW=17.9MiB/s (18.8MB/s)(18.1MiB/1009msec) 00:10:32.581 slat (nsec): min=1400, max=14073k, avg=108747.78, stdev=818891.28 00:10:32.581 clat (usec): min=4365, max=35534, avg=13571.83, stdev=4426.58 00:10:32.581 lat (usec): min=4371, max=35559, avg=13680.58, stdev=4480.65 00:10:32.581 clat percentiles (usec): 00:10:32.581 | 1.00th=[ 5866], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[11076], 00:10:32.581 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[12125], 00:10:32.581 | 70.00th=[14222], 80.00th=[16909], 90.00th=[21365], 95.00th=[22152], 00:10:32.581 | 99.00th=[27657], 99.50th=[27919], 99.90th=[35390], 99.95th=[35390], 00:10:32.581 | 99.99th=[35390] 00:10:32.581 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:10:32.581 slat (usec): min=2, max=25798, avg=91.52, 
stdev=696.15 00:10:32.581 clat (usec): min=2891, max=47378, avg=12682.20, stdev=4821.82 00:10:32.581 lat (usec): min=2901, max=47402, avg=12773.71, stdev=4887.60 00:10:32.581 clat percentiles (usec): 00:10:32.581 | 1.00th=[ 4146], 5.00th=[ 6128], 10.00th=[ 7898], 20.00th=[10814], 00:10:32.581 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:10:32.581 | 70.00th=[11863], 80.00th=[13960], 90.00th=[20579], 95.00th=[22152], 00:10:32.581 | 99.00th=[28181], 99.50th=[28181], 99.90th=[28443], 99.95th=[38011], 00:10:32.581 | 99.99th=[47449] 00:10:32.581 bw ( KiB/s): min=17128, max=22992, per=24.38%, avg=20060.00, stdev=4146.47, samples=2 00:10:32.581 iops : min= 4282, max= 5748, avg=5015.00, stdev=1036.62, samples=2 00:10:32.581 lat (msec) : 4=0.41%, 10=15.10%, 20=71.49%, 50=13.01% 00:10:32.581 cpu : usr=4.17%, sys=4.86%, ctx=536, majf=0, minf=1 00:10:32.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:32.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.581 issued rwts: total=4630,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.581 job1: (groupid=0, jobs=1): err= 0: pid=2900255: Fri Dec 6 15:27:38 2024 00:10:32.581 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:10:32.581 slat (nsec): min=1485, max=10685k, avg=93039.06, stdev=602689.52 00:10:32.581 clat (usec): min=3877, max=21134, avg=11561.98, stdev=2515.15 00:10:32.581 lat (usec): min=3886, max=21144, avg=11655.02, stdev=2552.16 00:10:32.581 clat percentiles (usec): 00:10:32.581 | 1.00th=[ 5604], 5.00th=[ 8291], 10.00th=[ 9110], 20.00th=[10028], 00:10:32.581 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:10:32.581 | 70.00th=[11731], 80.00th=[12387], 90.00th=[15270], 95.00th=[17171], 00:10:32.581 | 99.00th=[19530], 99.50th=[20055], 
99.90th=[20841], 99.95th=[21103], 00:10:32.581 | 99.99th=[21103] 00:10:32.581 write: IOPS=5875, BW=23.0MiB/s (24.1MB/s)(23.1MiB/1006msec); 0 zone resets 00:10:32.581 slat (usec): min=2, max=8002, avg=74.29, stdev=301.49 00:10:32.581 clat (usec): min=335, max=21354, avg=10539.47, stdev=2249.49 00:10:32.581 lat (usec): min=392, max=21357, avg=10613.76, stdev=2266.47 00:10:32.581 clat percentiles (usec): 00:10:32.581 | 1.00th=[ 3523], 5.00th=[ 5866], 10.00th=[ 7570], 20.00th=[ 9503], 00:10:32.581 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11338], 00:10:32.581 | 70.00th=[11469], 80.00th=[11600], 90.00th=[12125], 95.00th=[13698], 00:10:32.581 | 99.00th=[16188], 99.50th=[17695], 99.90th=[21365], 99.95th=[21365], 00:10:32.581 | 99.99th=[21365] 00:10:32.581 bw ( KiB/s): min=21696, max=24576, per=28.11%, avg=23136.00, stdev=2036.47, samples=2 00:10:32.581 iops : min= 5424, max= 6144, avg=5784.00, stdev=509.12, samples=2 00:10:32.581 lat (usec) : 500=0.03%, 750=0.03% 00:10:32.581 lat (msec) : 2=0.02%, 4=0.97%, 10=22.46%, 20=76.06%, 50=0.43% 00:10:32.581 cpu : usr=4.08%, sys=6.17%, ctx=745, majf=0, minf=1 00:10:32.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:32.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.581 issued rwts: total=5632,5911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.581 job2: (groupid=0, jobs=1): err= 0: pid=2900256: Fri Dec 6 15:27:38 2024 00:10:32.581 read: IOPS=4216, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1003msec) 00:10:32.581 slat (nsec): min=1299, max=14285k, avg=113430.09, stdev=695092.25 00:10:32.581 clat (usec): min=2695, max=37480, avg=14508.40, stdev=4049.07 00:10:32.581 lat (usec): min=2698, max=37489, avg=14621.83, stdev=4079.78 00:10:32.581 clat percentiles (usec): 00:10:32.581 | 1.00th=[ 6849], 
5.00th=[10683], 10.00th=[11469], 20.00th=[12256], 00:10:32.581 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:10:32.581 | 70.00th=[14484], 80.00th=[15008], 90.00th=[21103], 95.00th=[22676], 00:10:32.581 | 99.00th=[30540], 99.50th=[30802], 99.90th=[37487], 99.95th=[37487], 00:10:32.581 | 99.99th=[37487] 00:10:32.581 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:32.581 slat (usec): min=2, max=21117, avg=104.18, stdev=651.45 00:10:32.581 clat (usec): min=387, max=37051, avg=14307.75, stdev=4750.76 00:10:32.581 lat (usec): min=485, max=37079, avg=14411.93, stdev=4786.06 00:10:32.581 clat percentiles (usec): 00:10:32.581 | 1.00th=[ 2671], 5.00th=[ 7308], 10.00th=[10552], 20.00th=[11731], 00:10:32.581 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13304], 60.00th=[13698], 00:10:32.581 | 70.00th=[14353], 80.00th=[17433], 90.00th=[21365], 95.00th=[23200], 00:10:32.581 | 99.00th=[29754], 99.50th=[32375], 99.90th=[34341], 99.95th=[34341], 00:10:32.581 | 99.99th=[36963] 00:10:32.581 bw ( KiB/s): min=16384, max=20480, per=22.40%, avg=18432.00, stdev=2896.31, samples=2 00:10:32.581 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:32.581 lat (usec) : 500=0.02%, 750=0.05%, 1000=0.06% 00:10:32.581 lat (msec) : 2=0.15%, 4=1.13%, 10=4.40%, 20=79.78%, 50=14.42% 00:10:32.581 cpu : usr=2.10%, sys=6.29%, ctx=481, majf=0, minf=2 00:10:32.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:32.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.581 issued rwts: total=4229,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.581 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.581 job3: (groupid=0, jobs=1): err= 0: pid=2900257: Fri Dec 6 15:27:38 2024 00:10:32.581 read: IOPS=4948, BW=19.3MiB/s (20.3MB/s)(19.5MiB/1007msec) 00:10:32.581 slat (nsec): 
min=1301, max=12173k, avg=112856.88, stdev=817212.77 00:10:32.581 clat (usec): min=4522, max=25281, avg=13756.95, stdev=3379.57 00:10:32.581 lat (usec): min=4529, max=25288, avg=13869.80, stdev=3431.96 00:10:32.581 clat percentiles (usec): 00:10:32.581 | 1.00th=[ 4948], 5.00th=[ 8356], 10.00th=[10683], 20.00th=[11994], 00:10:32.581 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[13304], 00:10:32.581 | 70.00th=[13698], 80.00th=[15270], 90.00th=[19268], 95.00th=[20841], 00:10:32.581 | 99.00th=[23462], 99.50th=[23987], 99.90th=[24511], 99.95th=[24511], 00:10:32.581 | 99.99th=[25297] 00:10:32.581 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:10:32.581 slat (usec): min=2, max=10609, avg=80.13, stdev=353.16 00:10:32.581 clat (usec): min=1571, max=24538, avg=11513.14, stdev=2626.29 00:10:32.581 lat (usec): min=1584, max=24542, avg=11593.27, stdev=2658.10 00:10:32.581 clat percentiles (usec): 00:10:32.581 | 1.00th=[ 3359], 5.00th=[ 5342], 10.00th=[ 7439], 20.00th=[10683], 00:10:32.581 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12387], 60.00th=[12780], 00:10:32.581 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:10:32.581 | 99.00th=[13829], 99.50th=[14877], 99.90th=[23987], 99.95th=[24249], 00:10:32.581 | 99.99th=[24511] 00:10:32.581 bw ( KiB/s): min=20176, max=20784, per=24.89%, avg=20480.00, stdev=429.92, samples=2 00:10:32.581 iops : min= 5044, max= 5196, avg=5120.00, stdev=107.48, samples=2 00:10:32.581 lat (msec) : 2=0.19%, 4=0.99%, 10=10.40%, 20=84.43%, 50=3.99% 00:10:32.581 cpu : usr=3.88%, sys=4.97%, ctx=664, majf=0, minf=1 00:10:32.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:32.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.581 issued rwts: total=4983,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.581 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:10:32.581 00:10:32.581 Run status group 0 (all jobs): 00:10:32.581 READ: bw=75.4MiB/s (79.1MB/s), 16.5MiB/s-21.9MiB/s (17.3MB/s-22.9MB/s), io=76.1MiB (79.8MB), run=1003-1009msec 00:10:32.581 WRITE: bw=80.4MiB/s (84.3MB/s), 17.9MiB/s-23.0MiB/s (18.8MB/s-24.1MB/s), io=81.1MiB (85.0MB), run=1003-1009msec 00:10:32.581 00:10:32.582 Disk stats (read/write): 00:10:32.582 nvme0n1: ios=3860/4096, merge=0/0, ticks=52816/52762, in_queue=105578, util=98.00% 00:10:32.582 nvme0n2: ios=4765/5120, merge=0/0, ticks=41570/40602, in_queue=82172, util=100.00% 00:10:32.582 nvme0n3: ios=3584/3793, merge=0/0, ticks=24740/30157, in_queue=54897, util=89.06% 00:10:32.582 nvme0n4: ios=4153/4458, merge=0/0, ticks=55111/50134, in_queue=105245, util=98.22% 00:10:32.582 15:27:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:32.582 [global] 00:10:32.582 thread=1 00:10:32.582 invalidate=1 00:10:32.582 rw=randwrite 00:10:32.582 time_based=1 00:10:32.582 runtime=1 00:10:32.582 ioengine=libaio 00:10:32.582 direct=1 00:10:32.582 bs=4096 00:10:32.582 iodepth=128 00:10:32.582 norandommap=0 00:10:32.582 numjobs=1 00:10:32.582 00:10:32.582 verify_dump=1 00:10:32.582 verify_backlog=512 00:10:32.582 verify_state_save=0 00:10:32.582 do_verify=1 00:10:32.582 verify=crc32c-intel 00:10:32.582 [job0] 00:10:32.582 filename=/dev/nvme0n1 00:10:32.582 [job1] 00:10:32.582 filename=/dev/nvme0n2 00:10:32.582 [job2] 00:10:32.582 filename=/dev/nvme0n3 00:10:32.582 [job3] 00:10:32.582 filename=/dev/nvme0n4 00:10:32.582 Could not set queue depth (nvme0n1) 00:10:32.582 Could not set queue depth (nvme0n2) 00:10:32.582 Could not set queue depth (nvme0n3) 00:10:32.582 Could not set queue depth (nvme0n4) 00:10:32.838 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.838 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.838 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.838 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.838 fio-3.35 00:10:32.838 Starting 4 threads 00:10:34.209 00:10:34.209 job0: (groupid=0, jobs=1): err= 0: pid=2900623: Fri Dec 6 15:27:39 2024 00:10:34.209 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:10:34.209 slat (nsec): min=1335, max=10746k, avg=90588.26, stdev=665900.61 00:10:34.209 clat (usec): min=2688, max=27758, avg=11295.95, stdev=2939.89 00:10:34.209 lat (usec): min=2695, max=27774, avg=11386.54, stdev=2994.17 00:10:34.209 clat percentiles (usec): 00:10:34.209 | 1.00th=[ 4752], 5.00th=[ 7832], 10.00th=[ 8848], 20.00th=[ 9503], 00:10:34.209 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10421], 60.00th=[10814], 00:10:34.209 | 70.00th=[11731], 80.00th=[13960], 90.00th=[15664], 95.00th=[17171], 00:10:34.209 | 99.00th=[19268], 99.50th=[20055], 99.90th=[21365], 99.95th=[21365], 00:10:34.209 | 99.99th=[27657] 00:10:34.209 write: IOPS=5618, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:34.209 slat (nsec): min=1943, max=15276k, avg=78588.37, stdev=530271.84 00:10:34.209 clat (usec): min=617, max=38281, avg=11295.53, stdev=5740.94 00:10:34.209 lat (usec): min=1566, max=43372, avg=11374.12, stdev=5783.59 00:10:34.209 clat percentiles (usec): 00:10:34.209 | 1.00th=[ 3326], 5.00th=[ 5604], 10.00th=[ 7504], 20.00th=[ 8717], 00:10:34.209 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159], 00:10:34.209 | 70.00th=[11076], 80.00th=[11600], 90.00th=[16581], 95.00th=[25822], 00:10:34.209 | 99.00th=[35914], 99.50th=[36963], 99.90th=[38011], 99.95th=[38536], 00:10:34.209 | 99.99th=[38536] 00:10:34.209 bw ( KiB/s): min=17424, max=27632, per=29.58%, avg=22528.00, stdev=7218.15, 
samples=2 00:10:34.209 iops : min= 4356, max= 6908, avg=5632.00, stdev=1804.54, samples=2 00:10:34.209 lat (usec) : 750=0.01% 00:10:34.209 lat (msec) : 2=0.03%, 4=1.23%, 10=47.47%, 20=47.49%, 50=3.76% 00:10:34.209 cpu : usr=3.89%, sys=6.48%, ctx=572, majf=0, minf=1 00:10:34.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:34.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.209 issued rwts: total=5632,5635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.209 job1: (groupid=0, jobs=1): err= 0: pid=2900624: Fri Dec 6 15:27:39 2024 00:10:34.209 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:10:34.209 slat (nsec): min=1058, max=11570k, avg=114922.55, stdev=772458.16 00:10:34.209 clat (usec): min=3677, max=50566, avg=13902.35, stdev=5894.77 00:10:34.209 lat (usec): min=3683, max=50572, avg=14017.27, stdev=5958.75 00:10:34.209 clat percentiles (usec): 00:10:34.209 | 1.00th=[ 5669], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[10945], 00:10:34.209 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[13173], 00:10:34.209 | 70.00th=[13698], 80.00th=[15401], 90.00th=[19268], 95.00th=[26870], 00:10:34.209 | 99.00th=[41157], 99.50th=[45876], 99.90th=[50594], 99.95th=[50594], 00:10:34.209 | 99.99th=[50594] 00:10:34.209 write: IOPS=4265, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1006msec); 0 zone resets 00:10:34.209 slat (nsec): min=1998, max=11024k, avg=111762.49, stdev=551333.00 00:10:34.209 clat (usec): min=590, max=50564, avg=16419.95, stdev=9405.93 00:10:34.209 lat (usec): min=614, max=50574, avg=16531.72, stdev=9473.06 00:10:34.209 clat percentiles (usec): 00:10:34.209 | 1.00th=[ 2147], 5.00th=[ 5342], 10.00th=[ 6652], 20.00th=[ 9503], 00:10:34.209 | 30.00th=[10290], 40.00th=[11469], 50.00th=[11863], 60.00th=[13960], 00:10:34.209 | 
70.00th=[22938], 80.00th=[25560], 90.00th=[30278], 95.00th=[34341], 00:10:34.209 | 99.00th=[38536], 99.50th=[39584], 99.90th=[40633], 99.95th=[43779], 00:10:34.209 | 99.99th=[50594] 00:10:34.209 bw ( KiB/s): min=16384, max=16928, per=21.87%, avg=16656.00, stdev=384.67, samples=2 00:10:34.209 iops : min= 4096, max= 4232, avg=4164.00, stdev=96.17, samples=2 00:10:34.209 lat (usec) : 750=0.11%, 1000=0.04% 00:10:34.209 lat (msec) : 2=0.24%, 4=1.30%, 10=16.04%, 20=59.97%, 50=22.22% 00:10:34.209 lat (msec) : 100=0.08% 00:10:34.209 cpu : usr=2.99%, sys=4.88%, ctx=498, majf=0, minf=1 00:10:34.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:34.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.209 issued rwts: total=4096,4291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.209 job2: (groupid=0, jobs=1): err= 0: pid=2900625: Fri Dec 6 15:27:39 2024 00:10:34.209 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:10:34.209 slat (nsec): min=1476, max=12454k, avg=121478.68, stdev=703645.28 00:10:34.209 clat (usec): min=6661, max=41096, avg=14932.85, stdev=4067.91 00:10:34.209 lat (usec): min=6666, max=41123, avg=15054.33, stdev=4125.20 00:10:34.209 clat percentiles (usec): 00:10:34.209 | 1.00th=[ 8717], 5.00th=[10814], 10.00th=[11469], 20.00th=[11863], 00:10:34.209 | 30.00th=[12387], 40.00th=[13173], 50.00th=[13435], 60.00th=[14615], 00:10:34.209 | 70.00th=[16450], 80.00th=[17957], 90.00th=[19006], 95.00th=[21627], 00:10:34.209 | 99.00th=[28705], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:10:34.209 | 99.99th=[41157] 00:10:34.209 write: IOPS=3903, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1006msec); 0 zone resets 00:10:34.209 slat (usec): min=2, max=22676, avg=137.29, stdev=872.82 00:10:34.209 clat (usec): min=4498, max=70981, avg=18279.80, 
stdev=11608.75 00:10:34.209 lat (usec): min=5213, max=71016, avg=18417.09, stdev=11693.96 00:10:34.209 clat percentiles (usec): 00:10:34.209 | 1.00th=[ 6718], 5.00th=[ 9241], 10.00th=[10945], 20.00th=[11731], 00:10:34.209 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13566], 60.00th=[15008], 00:10:34.209 | 70.00th=[16057], 80.00th=[21365], 90.00th=[36963], 95.00th=[48497], 00:10:34.209 | 99.00th=[58459], 99.50th=[58459], 99.90th=[58459], 99.95th=[70779], 00:10:34.209 | 99.99th=[70779] 00:10:34.209 bw ( KiB/s): min=12272, max=18128, per=19.96%, avg=15200.00, stdev=4140.82, samples=2 00:10:34.209 iops : min= 3068, max= 4532, avg=3800.00, stdev=1035.20, samples=2 00:10:34.209 lat (msec) : 10=4.86%, 20=80.04%, 50=12.57%, 100=2.53% 00:10:34.209 cpu : usr=3.48%, sys=4.78%, ctx=428, majf=0, minf=1 00:10:34.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:34.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.209 issued rwts: total=3584,3927,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.209 job3: (groupid=0, jobs=1): err= 0: pid=2900626: Fri Dec 6 15:27:39 2024 00:10:34.209 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:10:34.209 slat (nsec): min=1332, max=10287k, avg=97629.35, stdev=567505.34 00:10:34.209 clat (usec): min=2763, max=28520, avg=12411.14, stdev=2757.71 00:10:34.209 lat (usec): min=2768, max=28529, avg=12508.76, stdev=2798.94 00:10:34.209 clat percentiles (usec): 00:10:34.209 | 1.00th=[ 6521], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[11076], 00:10:34.209 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12518], 00:10:34.209 | 70.00th=[12911], 80.00th=[13435], 90.00th=[15008], 95.00th=[17171], 00:10:34.209 | 99.00th=[21103], 99.50th=[26346], 99.90th=[28443], 99.95th=[28443], 00:10:34.209 | 99.99th=[28443] 
00:10:34.209 write: IOPS=5267, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1006msec); 0 zone resets 00:10:34.209 slat (usec): min=2, max=7875, avg=87.39, stdev=514.82 00:10:34.209 clat (usec): min=454, max=31481, avg=12035.61, stdev=2605.05 00:10:34.209 lat (usec): min=2897, max=31485, avg=12123.00, stdev=2633.04 00:10:34.209 clat percentiles (usec): 00:10:34.209 | 1.00th=[ 6390], 5.00th=[ 7767], 10.00th=[10028], 20.00th=[10945], 00:10:34.209 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:10:34.209 | 70.00th=[12518], 80.00th=[13304], 90.00th=[13960], 95.00th=[16581], 00:10:34.209 | 99.00th=[24249], 99.50th=[26084], 99.90th=[26608], 99.95th=[28443], 00:10:34.209 | 99.99th=[31589] 00:10:34.209 bw ( KiB/s): min=18824, max=22544, per=27.16%, avg=20684.00, stdev=2630.44, samples=2 00:10:34.209 iops : min= 4706, max= 5636, avg=5171.00, stdev=657.61, samples=2 00:10:34.209 lat (usec) : 500=0.01% 00:10:34.209 lat (msec) : 4=0.12%, 10=10.13%, 20=86.92%, 50=2.83% 00:10:34.209 cpu : usr=4.78%, sys=6.27%, ctx=486, majf=0, minf=1 00:10:34.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:34.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.209 issued rwts: total=5120,5299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.209 00:10:34.209 Run status group 0 (all jobs): 00:10:34.209 READ: bw=71.6MiB/s (75.0MB/s), 13.9MiB/s-21.9MiB/s (14.6MB/s-23.0MB/s), io=72.0MiB (75.5MB), run=1003-1006msec 00:10:34.209 WRITE: bw=74.4MiB/s (78.0MB/s), 15.2MiB/s-21.9MiB/s (16.0MB/s-23.0MB/s), io=74.8MiB (78.4MB), run=1003-1006msec 00:10:34.209 00:10:34.209 Disk stats (read/write): 00:10:34.209 nvme0n1: ios=4942/5120, merge=0/0, ticks=49069/46588, in_queue=95657, util=86.67% 00:10:34.209 nvme0n2: ios=3609/3847, merge=0/0, ticks=45818/58589, in_queue=104407, 
util=99.49% 00:10:34.209 nvme0n3: ios=2808/3072, merge=0/0, ticks=16800/20208, in_queue=37008, util=98.65% 00:10:34.209 nvme0n4: ios=4274/4608, merge=0/0, ticks=26861/26561, in_queue=53422, util=99.79% 00:10:34.209 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:34.209 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2900854 00:10:34.209 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:34.209 15:27:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:34.209 [global] 00:10:34.209 thread=1 00:10:34.209 invalidate=1 00:10:34.209 rw=read 00:10:34.210 time_based=1 00:10:34.210 runtime=10 00:10:34.210 ioengine=libaio 00:10:34.210 direct=1 00:10:34.210 bs=4096 00:10:34.210 iodepth=1 00:10:34.210 norandommap=1 00:10:34.210 numjobs=1 00:10:34.210 00:10:34.210 [job0] 00:10:34.210 filename=/dev/nvme0n1 00:10:34.210 [job1] 00:10:34.210 filename=/dev/nvme0n2 00:10:34.210 [job2] 00:10:34.210 filename=/dev/nvme0n3 00:10:34.210 [job3] 00:10:34.210 filename=/dev/nvme0n4 00:10:34.210 Could not set queue depth (nvme0n1) 00:10:34.210 Could not set queue depth (nvme0n2) 00:10:34.210 Could not set queue depth (nvme0n3) 00:10:34.210 Could not set queue depth (nvme0n4) 00:10:34.467 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.467 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.467 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.467 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.467 fio-3.35 00:10:34.467 Starting 4 threads 00:10:37.138 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:37.396 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=33906688, buflen=4096 00:10:37.396 fio: pid=2901006, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:37.396 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:37.654 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=31076352, buflen=4096 00:10:37.655 fio: pid=2901005, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:37.655 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.655 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:37.655 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.655 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:37.913 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=315392, buflen=4096 00:10:37.913 fio: pid=2901003, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:37.913 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=13168640, buflen=4096 00:10:37.913 fio: pid=2901004, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:37.913 15:27:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.914 
15:27:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:37.914 00:10:37.914 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2901003: Fri Dec 6 15:27:43 2024 00:10:37.914 read: IOPS=24, BW=97.7KiB/s (100.0kB/s)(308KiB/3154msec) 00:10:37.914 slat (usec): min=12, max=4899, avg=86.25, stdev=552.03 00:10:37.914 clat (usec): min=358, max=42046, avg=40586.38, stdev=4658.19 00:10:37.914 lat (usec): min=399, max=46062, avg=40673.43, stdev=4697.20 00:10:37.914 clat percentiles (usec): 00:10:37.914 | 1.00th=[ 359], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:37.914 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:37.914 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:10:37.914 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:37.914 | 99.99th=[42206] 00:10:37.914 bw ( KiB/s): min= 96, max= 104, per=0.43%, avg=98.00, stdev= 3.35, samples=6 00:10:37.914 iops : min= 24, max= 26, avg=24.50, stdev= 0.84, samples=6 00:10:37.914 lat (usec) : 500=1.28% 00:10:37.914 lat (msec) : 50=97.44% 00:10:37.914 cpu : usr=0.13%, sys=0.00%, ctx=80, majf=0, minf=1 00:10:37.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.914 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.914 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.914 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2901004: Fri Dec 6 15:27:43 2024 00:10:37.914 read: IOPS=964, BW=3856KiB/s (3949kB/s)(12.6MiB/3335msec) 00:10:37.914 slat (usec): min=5, max=8604, avg=17.46, 
stdev=280.29 00:10:37.914 clat (usec): min=163, max=41992, avg=1011.69, stdev=5670.24 00:10:37.914 lat (usec): min=171, max=42015, avg=1026.48, stdev=5676.59 00:10:37.914 clat percentiles (usec): 00:10:37.914 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:10:37.914 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:10:37.914 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 243], 95.00th=[ 258], 00:10:37.914 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:37.914 | 99.99th=[42206] 00:10:37.914 bw ( KiB/s): min= 96, max= 8840, per=11.95%, avg=2745.33, stdev=4040.21, samples=6 00:10:37.914 iops : min= 24, max= 2211, avg=686.50, stdev=1010.35, samples=6 00:10:37.914 lat (usec) : 250=93.41%, 500=4.57%, 750=0.03% 00:10:37.914 lat (msec) : 50=1.96% 00:10:37.914 cpu : usr=0.30%, sys=0.84%, ctx=3221, majf=0, minf=2 00:10:37.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.914 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.914 issued rwts: total=3216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.914 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2901005: Fri Dec 6 15:27:43 2024 00:10:37.914 read: IOPS=2601, BW=10.2MiB/s (10.7MB/s)(29.6MiB/2917msec) 00:10:37.914 slat (nsec): min=6088, max=38979, avg=7938.59, stdev=1746.21 00:10:37.914 clat (usec): min=182, max=41213, avg=372.56, stdev=2210.47 00:10:37.914 lat (usec): min=192, max=41225, avg=380.50, stdev=2211.02 00:10:37.914 clat percentiles (usec): 00:10:37.914 | 1.00th=[ 202], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 237], 00:10:37.914 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 253], 00:10:37.914 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 
00:10:37.914 | 99.00th=[ 338], 99.50th=[ 445], 99.90th=[41157], 99.95th=[41157], 00:10:37.914 | 99.99th=[41157] 00:10:37.914 bw ( KiB/s): min= 200, max=15512, per=42.30%, avg=9720.00, stdev=6355.00, samples=5 00:10:37.914 iops : min= 50, max= 3878, avg=2430.00, stdev=1588.75, samples=5 00:10:37.914 lat (usec) : 250=50.03%, 500=49.62%, 750=0.04% 00:10:37.914 lat (msec) : 50=0.30% 00:10:37.914 cpu : usr=0.65%, sys=2.54%, ctx=7589, majf=0, minf=2 00:10:37.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.914 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.914 issued rwts: total=7588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.914 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2901006: Fri Dec 6 15:27:43 2024 00:10:37.914 read: IOPS=3052, BW=11.9MiB/s (12.5MB/s)(32.3MiB/2712msec) 00:10:37.914 slat (nsec): min=6475, max=44815, avg=8336.23, stdev=2540.72 00:10:37.914 clat (usec): min=161, max=41052, avg=315.40, stdev=2048.00 00:10:37.914 lat (usec): min=168, max=41062, avg=323.73, stdev=2048.55 00:10:37.914 clat percentiles (usec): 00:10:37.914 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 196], 00:10:37.914 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:10:37.914 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 258], 00:10:37.914 | 99.00th=[ 302], 99.50th=[ 326], 99.90th=[41157], 99.95th=[41157], 00:10:37.914 | 99.99th=[41157] 00:10:37.914 bw ( KiB/s): min= 192, max=19072, per=51.46%, avg=11825.60, stdev=7818.87, samples=5 00:10:37.914 iops : min= 48, max= 4768, avg=2956.40, stdev=1954.72, samples=5 00:10:37.914 lat (usec) : 250=93.94%, 500=5.77% 00:10:37.914 lat (msec) : 2=0.01%, 4=0.01%, 50=0.25% 00:10:37.914 cpu : usr=0.89%, sys=2.99%, 
ctx=8280, majf=0, minf=2 00:10:37.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.914 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.914 issued rwts: total=8279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.914 00:10:37.914 Run status group 0 (all jobs): 00:10:37.914 READ: bw=22.4MiB/s (23.5MB/s), 97.7KiB/s-11.9MiB/s (100.0kB/s-12.5MB/s), io=74.8MiB (78.5MB), run=2712-3335msec 00:10:37.914 00:10:37.914 Disk stats (read/write): 00:10:37.914 nvme0n1: ios=110/0, merge=0/0, ticks=3955/0, in_queue=3955, util=98.86% 00:10:37.914 nvme0n2: ios=2315/0, merge=0/0, ticks=4097/0, in_queue=4097, util=99.29% 00:10:37.914 nvme0n3: ios=7479/0, merge=0/0, ticks=3787/0, in_queue=3787, util=99.05% 00:10:37.914 nvme0n4: ios=7922/0, merge=0/0, ticks=2641/0, in_queue=2641, util=98.96% 00:10:38.173 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.173 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:38.431 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.431 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:38.689 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.689 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:38.689 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:38.689 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:38.948 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:38.948 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2900854 00:10:38.948 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:38.948 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:39.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.206 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:39.206 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:39.206 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:39.206 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.206 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:39.206 15:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.206 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:39.206 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:39.206 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:39.206 nvmf hotplug test: fio failed as expected 00:10:39.206 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:39.463 rmmod nvme_tcp 00:10:39.463 rmmod nvme_fabrics 00:10:39.463 rmmod nvme_keyring 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:39.463 15:27:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2897924 ']' 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2897924 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2897924 ']' 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2897924 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2897924 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2897924' 00:10:39.463 killing process with pid 2897924 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2897924 00:10:39.463 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2897924 00:10:39.722 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:39.722 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:39.722 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:39.722 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:39.722 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:10:39.722 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:39.722 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:39.722 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:39.722 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:39.722 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.722 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.722 15:27:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.624 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:41.624 00:10:41.624 real 0m27.580s 00:10:41.624 user 1m50.043s 00:10:41.624 sys 0m8.731s 00:10:41.624 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.624 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.624 ************************************ 00:10:41.624 END TEST nvmf_fio_target 00:10:41.624 ************************************ 00:10:41.624 15:27:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:41.624 15:27:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:41.624 15:27:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.624 15:27:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:41.883 ************************************ 
00:10:41.883 START TEST nvmf_bdevio 00:10:41.883 ************************************ 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:41.883 * Looking for test storage... 00:10:41.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.883 15:27:47 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:41.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.883 --rc genhtml_branch_coverage=1 00:10:41.883 --rc genhtml_function_coverage=1 00:10:41.883 --rc genhtml_legend=1 00:10:41.883 --rc geninfo_all_blocks=1 00:10:41.883 --rc geninfo_unexecuted_blocks=1 00:10:41.883 00:10:41.883 ' 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:41.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.883 --rc genhtml_branch_coverage=1 00:10:41.883 --rc genhtml_function_coverage=1 00:10:41.883 --rc genhtml_legend=1 00:10:41.883 --rc geninfo_all_blocks=1 00:10:41.883 --rc geninfo_unexecuted_blocks=1 00:10:41.883 00:10:41.883 ' 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:41.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.883 --rc genhtml_branch_coverage=1 00:10:41.883 --rc genhtml_function_coverage=1 00:10:41.883 --rc genhtml_legend=1 00:10:41.883 --rc geninfo_all_blocks=1 00:10:41.883 --rc geninfo_unexecuted_blocks=1 00:10:41.883 00:10:41.883 ' 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:41.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.883 --rc genhtml_branch_coverage=1 00:10:41.883 --rc genhtml_function_coverage=1 00:10:41.883 --rc genhtml_legend=1 00:10:41.883 --rc geninfo_all_blocks=1 00:10:41.883 --rc geninfo_unexecuted_blocks=1 00:10:41.883 00:10:41.883 ' 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.883 15:27:47 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.883 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:41.884 15:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.509 15:27:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:48.509 15:27:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:48.509 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:48.509 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:48.509 
15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:48.509 Found net devices under 0000:86:00.0: cvl_0_0 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:48.509 Found net devices under 0000:86:00.1: cvl_0_1 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:48.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:48.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:10:48.509 00:10:48.509 --- 10.0.0.2 ping statistics --- 00:10:48.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.509 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:48.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:10:48.509 00:10:48.509 --- 10.0.0.1 ping statistics --- 00:10:48.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.509 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:48.509 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:48.510 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:48.510 15:27:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.510 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.510 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2905401 00:10:48.510 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2905401 00:10:48.510 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:48.510 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2905401 ']' 00:10:48.510 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.510 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.510 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.510 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.510 15:27:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.510 [2024-12-06 15:27:53.917539] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:10:48.510 [2024-12-06 15:27:53.917591] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.510 [2024-12-06 15:27:53.996624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.510 [2024-12-06 15:27:54.038637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.510 [2024-12-06 15:27:54.038674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.510 [2024-12-06 15:27:54.038681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.510 [2024-12-06 15:27:54.038688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.510 [2024-12-06 15:27:54.038693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
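The `nvmf_tgt` target in this run is launched with `-m 0x78`, a hex core mask selecting CPU cores 3 through 6, which matches the four reactor cores reported in the startup notices. Decoding such a mask can be sketched in Python (the helper name is illustrative, not part of SPDK):

```python
def decode_core_mask(mask: int) -> list[int]:
    """Return the CPU core indices selected by a DPDK/SPDK-style hex core mask."""
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

# 0x78 = 0b1111000 -> cores 3, 4, 5, 6 (the four reactors in this run)
print(decode_core_mask(0x78))
```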
00:10:48.510 [2024-12-06 15:27:54.040334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:48.510 [2024-12-06 15:27:54.040444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:48.510 [2024-12-06 15:27:54.040465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:48.510 [2024-12-06 15:27:54.040467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.510 [2024-12-06 15:27:54.177960] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.510 15:27:54 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.510 Malloc0 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.510 [2024-12-06 15:27:54.240080] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:48.510 { 00:10:48.510 "params": { 00:10:48.510 "name": "Nvme$subsystem", 00:10:48.510 "trtype": "$TEST_TRANSPORT", 00:10:48.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:48.510 "adrfam": "ipv4", 00:10:48.510 "trsvcid": "$NVMF_PORT", 00:10:48.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:48.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:48.510 "hdgst": ${hdgst:-false}, 00:10:48.510 "ddgst": ${ddgst:-false} 00:10:48.510 }, 00:10:48.510 "method": "bdev_nvme_attach_controller" 00:10:48.510 } 00:10:48.510 EOF 00:10:48.510 )") 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:48.510 15:27:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:48.510 "params": { 00:10:48.510 "name": "Nvme1", 00:10:48.510 "trtype": "tcp", 00:10:48.510 "traddr": "10.0.0.2", 00:10:48.510 "adrfam": "ipv4", 00:10:48.510 "trsvcid": "4420", 00:10:48.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:48.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:48.510 "hdgst": false, 00:10:48.510 "ddgst": false 00:10:48.510 }, 00:10:48.510 "method": "bdev_nvme_attach_controller" 00:10:48.510 }' 00:10:48.510 [2024-12-06 15:27:54.290573] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:10:48.510 [2024-12-06 15:27:54.290627] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2905497 ] 00:10:48.510 [2024-12-06 15:27:54.363330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:48.510 [2024-12-06 15:27:54.407125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.510 [2024-12-06 15:27:54.407233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.510 [2024-12-06 15:27:54.407234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.767 I/O targets: 00:10:48.767 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:48.767 00:10:48.767 00:10:48.767 CUnit - A unit testing framework for C - Version 2.1-3 00:10:48.767 http://cunit.sourceforge.net/ 00:10:48.767 00:10:48.767 00:10:48.767 Suite: bdevio tests on: Nvme1n1 00:10:48.767 Test: blockdev write read block ...passed 00:10:49.026 Test: blockdev write zeroes read block ...passed 00:10:49.026 Test: blockdev write zeroes read no split ...passed 00:10:49.026 Test: blockdev write zeroes read split 
...passed 00:10:49.026 Test: blockdev write zeroes read split partial ...passed 00:10:49.026 Test: blockdev reset ...[2024-12-06 15:27:54.843797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:49.026 [2024-12-06 15:27:54.843867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182ff30 (9): Bad file descriptor 00:10:49.026 [2024-12-06 15:27:54.858127] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:49.026 passed 00:10:49.026 Test: blockdev write read 8 blocks ...passed 00:10:49.026 Test: blockdev write read size > 128k ...passed 00:10:49.026 Test: blockdev write read invalid size ...passed 00:10:49.026 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:49.026 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:49.026 Test: blockdev write read max offset ...passed 00:10:49.283 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:49.283 Test: blockdev writev readv 8 blocks ...passed 00:10:49.283 Test: blockdev writev readv 30 x 1block ...passed 00:10:49.283 Test: blockdev writev readv block ...passed 00:10:49.283 Test: blockdev writev readv size > 128k ...passed 00:10:49.283 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:49.283 Test: blockdev comparev and writev ...[2024-12-06 15:27:55.110048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.283 [2024-12-06 15:27:55.110081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:49.283 [2024-12-06 15:27:55.110095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.283 [2024-12-06 
15:27:55.110103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:49.283 [2024-12-06 15:27:55.110355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.283 [2024-12-06 15:27:55.110366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:49.283 [2024-12-06 15:27:55.110382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.283 [2024-12-06 15:27:55.110389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:49.283 [2024-12-06 15:27:55.110611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.283 [2024-12-06 15:27:55.110620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:49.283 [2024-12-06 15:27:55.110632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.283 [2024-12-06 15:27:55.110638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:49.283 [2024-12-06 15:27:55.110861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.283 [2024-12-06 15:27:55.110872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:49.283 [2024-12-06 15:27:55.110885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:49.283 [2024-12-06 15:27:55.110892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:49.283 passed 00:10:49.283 Test: blockdev nvme passthru rw ...passed 00:10:49.283 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:27:55.192721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:49.283 [2024-12-06 15:27:55.192736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:49.283 [2024-12-06 15:27:55.192840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:49.283 [2024-12-06 15:27:55.192849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:49.283 [2024-12-06 15:27:55.192947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:49.283 [2024-12-06 15:27:55.192956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:49.283 [2024-12-06 15:27:55.193057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:49.283 [2024-12-06 15:27:55.193066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:49.283 passed 00:10:49.283 Test: blockdev nvme admin passthru ...passed 00:10:49.283 Test: blockdev copy ...passed 00:10:49.283 00:10:49.283 Run Summary: Type Total Ran Passed Failed Inactive 00:10:49.283 suites 1 1 n/a 0 0 00:10:49.283 tests 23 23 23 0 0 00:10:49.283 asserts 152 152 152 0 n/a 00:10:49.283 00:10:49.283 Elapsed time = 1.115 seconds 
00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:49.541 rmmod nvme_tcp 00:10:49.541 rmmod nvme_fabrics 00:10:49.541 rmmod nvme_keyring 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2905401 ']' 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2905401 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2905401 ']' 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2905401 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2905401 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2905401' 00:10:49.541 killing process with pid 2905401 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2905401 00:10:49.541 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2905401 00:10:49.801 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:49.801 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:49.801 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:49.801 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:49.801 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:49.801 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:49.801 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:49.801 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:10:49.801 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:49.801 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.801 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.801 15:27:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.338 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:52.338 00:10:52.338 real 0m10.117s 00:10:52.338 user 0m10.610s 00:10:52.338 sys 0m5.043s 00:10:52.338 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.338 15:27:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.338 ************************************ 00:10:52.338 END TEST nvmf_bdevio 00:10:52.338 ************************************ 00:10:52.338 15:27:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:52.338 00:10:52.338 real 4m38.917s 00:10:52.338 user 10m37.434s 00:10:52.338 sys 1m40.232s 00:10:52.338 15:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.338 15:27:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:52.338 ************************************ 00:10:52.338 END TEST nvmf_target_core 00:10:52.338 ************************************ 00:10:52.338 15:27:57 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:52.338 15:27:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.338 15:27:57 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.338 15:27:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:10:52.338 ************************************ 00:10:52.338 START TEST nvmf_target_extra 00:10:52.338 ************************************ 00:10:52.338 15:27:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:52.338 * Looking for test storage... 00:10:52.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:52.338 15:27:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:52.338 15:27:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:52.338 15:27:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:52.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.338 --rc genhtml_branch_coverage=1 00:10:52.338 --rc genhtml_function_coverage=1 00:10:52.338 --rc genhtml_legend=1 00:10:52.338 --rc geninfo_all_blocks=1 
00:10:52.338 --rc geninfo_unexecuted_blocks=1 00:10:52.338 00:10:52.338 ' 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:52.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.338 --rc genhtml_branch_coverage=1 00:10:52.338 --rc genhtml_function_coverage=1 00:10:52.338 --rc genhtml_legend=1 00:10:52.338 --rc geninfo_all_blocks=1 00:10:52.338 --rc geninfo_unexecuted_blocks=1 00:10:52.338 00:10:52.338 ' 00:10:52.338 15:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:52.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.338 --rc genhtml_branch_coverage=1 00:10:52.338 --rc genhtml_function_coverage=1 00:10:52.338 --rc genhtml_legend=1 00:10:52.339 --rc geninfo_all_blocks=1 00:10:52.339 --rc geninfo_unexecuted_blocks=1 00:10:52.339 00:10:52.339 ' 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:52.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.339 --rc genhtml_branch_coverage=1 00:10:52.339 --rc genhtml_function_coverage=1 00:10:52.339 --rc genhtml_legend=1 00:10:52.339 --rc geninfo_all_blocks=1 00:10:52.339 --rc geninfo_unexecuted_blocks=1 00:10:52.339 00:10:52.339 ' 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:52.339 ************************************ 00:10:52.339 START TEST nvmf_example 00:10:52.339 ************************************ 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:52.339 * Looking for test storage... 00:10:52.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.339 
15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:52.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.339 --rc genhtml_branch_coverage=1 00:10:52.339 --rc genhtml_function_coverage=1 00:10:52.339 --rc genhtml_legend=1 00:10:52.339 --rc geninfo_all_blocks=1 00:10:52.339 --rc geninfo_unexecuted_blocks=1 00:10:52.339 00:10:52.339 ' 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:52.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.339 --rc genhtml_branch_coverage=1 00:10:52.339 --rc genhtml_function_coverage=1 00:10:52.339 --rc genhtml_legend=1 00:10:52.339 --rc geninfo_all_blocks=1 00:10:52.339 --rc geninfo_unexecuted_blocks=1 00:10:52.339 00:10:52.339 ' 00:10:52.339 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:52.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.339 --rc genhtml_branch_coverage=1 00:10:52.339 --rc genhtml_function_coverage=1 00:10:52.339 --rc genhtml_legend=1 00:10:52.339 --rc geninfo_all_blocks=1 00:10:52.339 --rc geninfo_unexecuted_blocks=1 00:10:52.340 00:10:52.340 ' 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:52.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.340 --rc 
genhtml_branch_coverage=1 00:10:52.340 --rc genhtml_function_coverage=1 00:10:52.340 --rc genhtml_legend=1 00:10:52.340 --rc geninfo_all_blocks=1 00:10:52.340 --rc geninfo_unexecuted_blocks=1 00:10:52.340 00:10:52.340 ' 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.340 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.599 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:52.599 15:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:52.599 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:52.599 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:52.599 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:52.599 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:52.599 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:52.599 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:52.599 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.599 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.600 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:52.600 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:52.600 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.600 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:52.600 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:52.600 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:52.600 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.600 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.600 
15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.600 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:52.600 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:52.600 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:52.600 15:27:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.171 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.171 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:59.171 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:59.171 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:59.171 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:59.171 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:59.171 15:28:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:59.171 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:59.171 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:59.171 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:59.171 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:59.171 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:59.171 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:59.171 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:59.171 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:59.171 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.171 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.171 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.171 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.171 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.171 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:59.172 15:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:59.172 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:59.172 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:59.172 Found net devices under 0000:86:00.0: cvl_0_0 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.172 15:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:59.172 Found net devices under 0000:86:00.1: cvl_0_1 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.172 
15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:59.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:59.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:10:59.172 00:10:59.172 --- 10.0.0.2 ping statistics --- 00:10:59.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.172 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:10:59.172 00:10:59.172 --- 10.0.0.1 ping statistics --- 00:10:59.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.172 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:59.172 15:28:04 
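The `nvmf_tcp_init` sequence above splits the two ports of one physical NIC into a target/initiator pair on a single host: `cvl_0_0` is moved into the namespace `cvl_0_0_ns_spdk` as the target side (10.0.0.2), `cvl_0_1` stays in the root namespace as the initiator (10.0.0.1), and the two pings verify the path. A dry-run sketch of that plumbing, with interface names and addresses taken from the log; `RUN=echo` prints the privileged commands instead of executing them, since the real ones need root:

```shell
# Dry-run of the target-namespace setup traced in the log.
# Set RUN= (empty) and run as root to actually apply it.
cmds=$(
  RUN=echo
  NS=cvl_0_0_ns_spdk
  $RUN ip netns add "$NS"
  $RUN ip link set cvl_0_0 netns "$NS"                      # target NIC into the namespace
  $RUN ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  $RUN ip link set cvl_0_1 up
  $RUN ip netns exec "$NS" ip link set cvl_0_0 up
  $RUN ip netns exec "$NS" ip link set lo up
  # Connectivity checks corresponding to the two pings in the log:
  $RUN ping -c 1 10.0.0.2
  $RUN ip netns exec "$NS" ping -c 1 10.0.0.1
)
printf '%s\n' "$cmds"
```

Because the target runs under `ip netns exec cvl_0_0_ns_spdk`, traffic between 10.0.0.1 and 10.0.0.2 traverses the real wire between the two ports rather than the loopback path.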
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2909321 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2909321 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2909321 ']' 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.172 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.173 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:59.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.173 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.173 15:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:59.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:59.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:59.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.429 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:59.430 
15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:59.430 15:28:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:11.611 Initializing NVMe Controllers 00:11:11.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:11.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:11.611 Initialization complete. Launching workers. 00:11:11.611 ======================================================== 00:11:11.611 Latency(us) 00:11:11.611 Device Information : IOPS MiB/s Average min max 00:11:11.611 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18844.49 73.61 3395.83 592.73 15505.92 00:11:11.611 ======================================================== 00:11:11.611 Total : 18844.49 73.61 3395.83 592.73 15505.92 00:11:11.611 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:11.611 rmmod nvme_tcp 00:11:11.611 rmmod nvme_fabrics 00:11:11.611 rmmod nvme_keyring 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
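The perf summary above is internally consistent: `spdk_nvme_perf` ran with `-o 4096` (4 KiB I/O), so the reported MiB/s column should equal IOPS times I/O size divided by 2^20. A quick arithmetic check:

```shell
# Throughput cross-check for the table above: 18844.49 IOPS at 4096-byte
# I/O size, converted to MiB/s (1 MiB = 1048576 bytes).
mibs=$(awk 'BEGIN { printf "%.2f", 18844.49 * 4096 / (1024 * 1024) }')
echo "computed: $mibs MiB/s (log reports 73.61)"
```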
00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2909321 ']' 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2909321 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2909321 ']' 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2909321 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2909321 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2909321' 00:11:11.611 killing process with pid 2909321 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2909321 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2909321 00:11:11.611 nvmf threads initialize successfully 00:11:11.611 bdev subsystem init successfully 00:11:11.611 created a nvmf target service 00:11:11.611 create targets's poll groups done 00:11:11.611 all subsystems of target started 00:11:11.611 nvmf target is running 00:11:11.611 all subsystems of target stopped 00:11:11.611 destroy targets's poll groups done 00:11:11.611 destroyed the nvmf target service 00:11:11.611 bdev subsystem 
finish successfully 00:11:11.611 nvmf threads destroy successfully 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.611 15:28:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.179 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:12.179 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:12.179 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:12.179 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.179 00:11:12.179 real 0m19.947s 00:11:12.179 user 0m46.531s 00:11:12.179 sys 0m6.085s 00:11:12.179 
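The teardown above (`iptr` → `iptables-save | grep -v SPDK_NVMF | iptables-restore`) pairs with the earlier `ipts` insert: every rule the test added carried an `-m comment --comment 'SPDK_NVMF:…'` tag, so cleanup is a single filter pass over the saved ruleset rather than bookkeeping individual rule handles. The filtering step, simulated here on a captured two-rule snippet (the comment text is abbreviated) instead of a live `iptables-save`, which needs root:

```shell
# Tag-and-sweep firewall cleanup: drop every saved rule carrying the
# SPDK_NVMF comment tag, keep everything else untouched.
ruleset='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF: test rule"
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'
kept=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

Feeding `kept` back through `iptables-restore` would reinstate only the untagged rules, which is exactly what the log's `iptr` helper does.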
15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.179 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:12.179 ************************************ 00:11:12.179 END TEST nvmf_example 00:11:12.179 ************************************ 00:11:12.179 15:28:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:12.179 15:28:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.179 15:28:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.179 15:28:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:12.179 ************************************ 00:11:12.179 START TEST nvmf_filesystem 00:11:12.179 ************************************ 00:11:12.179 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:12.440 * Looking for test storage... 
00:11:12.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:12.440 
15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:12.440 --rc lcov_branch_coverage=1 --rc 
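The trace above walks SPDK's `cmp_versions` helper (`scripts/common.sh`) evaluating `lt 1.15 2`: both version strings are split into fields, then compared numerically field by field until one side wins. A minimal re-implementation of that idea (the function name `ver_lt` is mine, and this sketch splits on dots only, whereas the original also honors `-` and `:`):

```shell
# ver_lt A B: succeed (status 0) when version A sorts before version B.
# Fields are compared numerically, so 1.9 < 1.10 — unlike a string sort.
ver_lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                            # equal versions are not less-than
}
ver_lt 1.15 2 && echo "1.15 < 2"
```

Here the check decides which lcov flag set to use: because `1.15 < 2`, the script selects the `--rc lcov_*` spelling of the coverage options seen in the following lines.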
lcov_function_coverage=1 00:11:12.440 --rc genhtml_branch_coverage=1 00:11:12.440 --rc genhtml_function_coverage=1 00:11:12.440 --rc genhtml_legend=1 00:11:12.440 --rc geninfo_all_blocks=1 00:11:12.440 --rc geninfo_unexecuted_blocks=1 00:11:12.440 00:11:12.440 ' 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:12.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.440 --rc genhtml_branch_coverage=1 00:11:12.440 --rc genhtml_function_coverage=1 00:11:12.440 --rc genhtml_legend=1 00:11:12.440 --rc geninfo_all_blocks=1 00:11:12.440 --rc geninfo_unexecuted_blocks=1 00:11:12.440 00:11:12.440 ' 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:12.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.440 --rc genhtml_branch_coverage=1 00:11:12.440 --rc genhtml_function_coverage=1 00:11:12.440 --rc genhtml_legend=1 00:11:12.440 --rc geninfo_all_blocks=1 00:11:12.440 --rc geninfo_unexecuted_blocks=1 00:11:12.440 00:11:12.440 ' 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:12.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.440 --rc genhtml_branch_coverage=1 00:11:12.440 --rc genhtml_function_coverage=1 00:11:12.440 --rc genhtml_legend=1 00:11:12.440 --rc geninfo_all_blocks=1 00:11:12.440 --rc geninfo_unexecuted_blocks=1 00:11:12.440 00:11:12.440 ' 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:12.440 15:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:12.440 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:12.441 15:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:12.441 15:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:12.441 15:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:12.441 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:12.442 15:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:12.442 
15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:12.442 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:12.442 #define SPDK_CONFIG_H 00:11:12.442 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:12.442 #define SPDK_CONFIG_APPS 1 00:11:12.442 #define SPDK_CONFIG_ARCH native 00:11:12.442 #undef SPDK_CONFIG_ASAN 00:11:12.442 #undef SPDK_CONFIG_AVAHI 00:11:12.442 #undef SPDK_CONFIG_CET 00:11:12.442 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:12.442 #define SPDK_CONFIG_COVERAGE 1 00:11:12.442 #define SPDK_CONFIG_CROSS_PREFIX 00:11:12.442 #undef SPDK_CONFIG_CRYPTO 00:11:12.442 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:12.442 #undef SPDK_CONFIG_CUSTOMOCF 00:11:12.442 #undef SPDK_CONFIG_DAOS 00:11:12.442 #define SPDK_CONFIG_DAOS_DIR 00:11:12.442 #define SPDK_CONFIG_DEBUG 1 00:11:12.442 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:12.442 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:12.442 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:12.442 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:12.442 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:12.442 #undef SPDK_CONFIG_DPDK_UADK 00:11:12.442 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:12.442 #define SPDK_CONFIG_EXAMPLES 1 00:11:12.442 #undef SPDK_CONFIG_FC 00:11:12.442 #define SPDK_CONFIG_FC_PATH 00:11:12.442 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:12.442 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:12.442 #define SPDK_CONFIG_FSDEV 1 00:11:12.442 #undef SPDK_CONFIG_FUSE 00:11:12.442 #undef SPDK_CONFIG_FUZZER 00:11:12.442 #define SPDK_CONFIG_FUZZER_LIB 00:11:12.442 #undef SPDK_CONFIG_GOLANG 00:11:12.442 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:12.442 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:12.442 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:12.442 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:12.442 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:12.442 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:12.442 #undef SPDK_CONFIG_HAVE_LZ4 00:11:12.442 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:12.442 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:12.442 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:12.442 #define SPDK_CONFIG_IDXD 1 00:11:12.442 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:12.442 #undef SPDK_CONFIG_IPSEC_MB 00:11:12.442 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:12.442 #define SPDK_CONFIG_ISAL 1 00:11:12.442 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:12.442 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:12.442 #define SPDK_CONFIG_LIBDIR 00:11:12.442 #undef SPDK_CONFIG_LTO 00:11:12.442 #define SPDK_CONFIG_MAX_LCORES 128 00:11:12.442 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:12.442 #define SPDK_CONFIG_NVME_CUSE 1 00:11:12.442 #undef SPDK_CONFIG_OCF 00:11:12.442 #define SPDK_CONFIG_OCF_PATH 00:11:12.442 #define SPDK_CONFIG_OPENSSL_PATH 00:11:12.442 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:12.442 #define SPDK_CONFIG_PGO_DIR 00:11:12.442 #undef SPDK_CONFIG_PGO_USE 00:11:12.442 #define SPDK_CONFIG_PREFIX /usr/local 00:11:12.442 #undef SPDK_CONFIG_RAID5F 00:11:12.442 #undef SPDK_CONFIG_RBD 00:11:12.442 #define SPDK_CONFIG_RDMA 1 00:11:12.442 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:12.442 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:12.442 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:12.442 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:12.442 #define SPDK_CONFIG_SHARED 1 00:11:12.442 #undef SPDK_CONFIG_SMA 00:11:12.442 #define SPDK_CONFIG_TESTS 1 00:11:12.442 #undef SPDK_CONFIG_TSAN 00:11:12.442 #define SPDK_CONFIG_UBLK 1 00:11:12.442 #define SPDK_CONFIG_UBSAN 1 00:11:12.442 #undef SPDK_CONFIG_UNIT_TESTS 00:11:12.442 #undef SPDK_CONFIG_URING 00:11:12.442 #define SPDK_CONFIG_URING_PATH 00:11:12.442 #undef SPDK_CONFIG_URING_ZNS 00:11:12.442 #undef SPDK_CONFIG_USDT 00:11:12.442 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:12.442 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:12.442 #define SPDK_CONFIG_VFIO_USER 1 00:11:12.442 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:12.442 #define SPDK_CONFIG_VHOST 1 00:11:12.443 #define SPDK_CONFIG_VIRTIO 1 00:11:12.443 #undef SPDK_CONFIG_VTUNE 00:11:12.443 #define SPDK_CONFIG_VTUNE_DIR 00:11:12.443 #define SPDK_CONFIG_WERROR 1 00:11:12.443 #define SPDK_CONFIG_WPDK_DIR 00:11:12.443 #undef SPDK_CONFIG_XNVME 00:11:12.443 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:12.443 15:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:12.443 
15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:12.443 15:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:12.443 
15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:12.443 15:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:12.443 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2911730 ]] 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2911730 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:12.444 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.mtkzK8 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.mtkzK8/tests/target /tmp/spdk.mtkzK8 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189306900480 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963969536 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6657069056 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971953664 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169753088 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:12.704 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981476864 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:11:12.705 15:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=507904 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:12.705 * Looking for test storage... 
00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189306900480 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8871661568 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.705 15:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:12.705 15:28:18 
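The trace above (autotest_common.sh@351–401) shows `set_test_storage` at work: it generates a temp-dir fallback with `mktemp -udt spdk.XXXXXX`, builds a list of storage candidates, parses `df` for each candidate's mount, and accepts the first one whose free space covers the requested size. A minimal sketch of that selection logic, assuming a hypothetical stand-in function (`pick_test_storage` is not the real SPDK helper, and the real script also tracks mounts, filesystem types, and a `new_size` growth check):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the storage-candidate selection traced above.
# Not the actual common/autotest_common.sh implementation.
pick_test_storage() {
    local target_dir=$1 requested_size=$2
    local target_space
    # df -P (POSIX format): line 2, field 4 is available space in 1K blocks
    target_space=$(df -P "$target_dir" | awk 'NR==2 {print $4}')
    if (( target_space * 1024 >= requested_size )); then
        # enough room on the mount backing target_dir: use it
        echo "$target_dir"
    else
        # otherwise fall back to a fresh temp path,
        # mirroring the trace's `mktemp -udt spdk.XXXXXX`
        mktemp -udt spdk.XXXXXX
    fi
}
```

In the log, the workspace mount (`spdk_root`, overlay on `/`) has ~189 GB available against a ~2.2 GB request, so the first candidate wins and the script prints "Found test storage at …" without touching the fallback.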
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:12.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.705 --rc genhtml_branch_coverage=1 00:11:12.705 --rc genhtml_function_coverage=1 00:11:12.705 --rc genhtml_legend=1 00:11:12.705 --rc geninfo_all_blocks=1 00:11:12.705 --rc geninfo_unexecuted_blocks=1 00:11:12.705 00:11:12.705 ' 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:12.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.705 --rc genhtml_branch_coverage=1 00:11:12.705 --rc genhtml_function_coverage=1 00:11:12.705 --rc genhtml_legend=1 00:11:12.705 --rc geninfo_all_blocks=1 00:11:12.705 --rc geninfo_unexecuted_blocks=1 00:11:12.705 00:11:12.705 ' 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:12.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.705 --rc genhtml_branch_coverage=1 00:11:12.705 --rc genhtml_function_coverage=1 00:11:12.705 --rc genhtml_legend=1 00:11:12.705 --rc geninfo_all_blocks=1 00:11:12.705 --rc geninfo_unexecuted_blocks=1 00:11:12.705 00:11:12.705 ' 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:12.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.705 --rc genhtml_branch_coverage=1 00:11:12.705 --rc genhtml_function_coverage=1 00:11:12.705 --rc genhtml_legend=1 00:11:12.705 --rc geninfo_all_blocks=1 00:11:12.705 --rc geninfo_unexecuted_blocks=1 00:11:12.705 00:11:12.705 ' 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.705 15:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.705 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:12.706 15:28:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.277 15:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:19.277 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:19.277 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.277 15:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:19.277 Found net devices under 0000:86:00.0: cvl_0_0 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:19.277 Found net devices under 0000:86:00.1: cvl_0_1 00:11:19.277 15:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.277 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:19.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:19.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:11:19.278 00:11:19.278 --- 10.0.0.2 ping statistics --- 00:11:19.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.278 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:19.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:11:19.278 00:11:19.278 --- 10.0.0.1 ping statistics --- 00:11:19.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.278 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:19.278 15:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.278 ************************************ 00:11:19.278 START TEST nvmf_filesystem_no_in_capsule 00:11:19.278 ************************************ 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2914884 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2914884 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2914884 ']' 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.278 [2024-12-06 15:28:24.701362] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:11:19.278 [2024-12-06 15:28:24.701424] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.278 [2024-12-06 15:28:24.779105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.278 [2024-12-06 15:28:24.821706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.278 [2024-12-06 15:28:24.821743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:19.278 [2024-12-06 15:28:24.821750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.278 [2024-12-06 15:28:24.821756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.278 [2024-12-06 15:28:24.821761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.278 [2024-12-06 15:28:24.823326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.278 [2024-12-06 15:28:24.823360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.278 [2024-12-06 15:28:24.823468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.278 [2024-12-06 15:28:24.823469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.278 [2024-12-06 15:28:24.960770] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.278 15:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.278 Malloc1 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.278 [2024-12-06 15:28:25.121275] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:19.278 15:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:19.278 { 00:11:19.278 "name": "Malloc1", 00:11:19.278 "aliases": [ 00:11:19.278 "8b402cbc-5dec-4079-ac98-cf60b222523b" 00:11:19.278 ], 00:11:19.278 "product_name": "Malloc disk", 00:11:19.278 "block_size": 512, 00:11:19.278 "num_blocks": 1048576, 00:11:19.278 "uuid": "8b402cbc-5dec-4079-ac98-cf60b222523b", 00:11:19.278 "assigned_rate_limits": { 00:11:19.278 "rw_ios_per_sec": 0, 00:11:19.278 "rw_mbytes_per_sec": 0, 00:11:19.278 "r_mbytes_per_sec": 0, 00:11:19.278 "w_mbytes_per_sec": 0 00:11:19.278 }, 00:11:19.278 "claimed": true, 00:11:19.278 "claim_type": "exclusive_write", 00:11:19.278 "zoned": false, 00:11:19.278 "supported_io_types": { 00:11:19.278 "read": true, 00:11:19.278 "write": true, 00:11:19.278 "unmap": true, 00:11:19.278 "flush": true, 00:11:19.278 "reset": true, 00:11:19.278 "nvme_admin": false, 00:11:19.278 "nvme_io": false, 00:11:19.278 "nvme_io_md": false, 00:11:19.278 "write_zeroes": true, 00:11:19.278 "zcopy": true, 00:11:19.278 "get_zone_info": false, 00:11:19.278 "zone_management": false, 00:11:19.278 "zone_append": false, 00:11:19.278 "compare": false, 00:11:19.278 "compare_and_write": 
false, 00:11:19.278 "abort": true, 00:11:19.278 "seek_hole": false, 00:11:19.278 "seek_data": false, 00:11:19.278 "copy": true, 00:11:19.278 "nvme_iov_md": false 00:11:19.278 }, 00:11:19.278 "memory_domains": [ 00:11:19.278 { 00:11:19.278 "dma_device_id": "system", 00:11:19.278 "dma_device_type": 1 00:11:19.278 }, 00:11:19.278 { 00:11:19.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.278 "dma_device_type": 2 00:11:19.278 } 00:11:19.278 ], 00:11:19.278 "driver_specific": {} 00:11:19.278 } 00:11:19.278 ]' 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:19.278 15:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:20.646 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:20.646 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:20.646 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:20.646 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:20.646 15:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:22.541 15:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:22.541 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:22.542 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:23.105 15:28:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:23.105 15:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:24.473 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:24.473 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:24.473 15:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:24.473 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.473 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:24.473 ************************************ 00:11:24.473 START TEST filesystem_ext4 00:11:24.473 ************************************ 00:11:24.473 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:24.473 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:24.473 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:24.473 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:24.473 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:24.473 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:24.473 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:24.473 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:24.473 15:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:24.473 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:24.473 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:24.473 mke2fs 1.47.0 (5-Feb-2023) 00:11:24.474 Discarding device blocks: 0/522240 done 00:11:24.474 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:24.474 Filesystem UUID: 53238076-5e85-4e15-be2d-2b806cc42b8a 00:11:24.474 Superblock backups stored on blocks: 00:11:24.474 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:24.474 00:11:24.474 Allocating group tables: 0/64 done 00:11:24.474 Writing inode tables: 0/64 done 00:11:24.474 Creating journal (8192 blocks): done 00:11:24.474 Writing superblocks and filesystem accounting information: 0/64 done 00:11:24.474 00:11:24.474 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:24.474 15:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:31.038 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:31.038 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:31.038 15:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:31.038 15:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2914884 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:31.038 00:11:31.038 real 0m5.952s 00:11:31.038 user 0m0.024s 00:11:31.038 sys 0m0.075s 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:31.038 ************************************ 00:11:31.038 END TEST filesystem_ext4 00:11:31.038 ************************************ 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:31.038 
15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.038 ************************************ 00:11:31.038 START TEST filesystem_btrfs 00:11:31.038 ************************************ 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:31.038 15:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:31.038 btrfs-progs v6.8.1 00:11:31.038 See https://btrfs.readthedocs.io for more information. 00:11:31.038 00:11:31.038 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:31.038 NOTE: several default settings have changed in version 5.15, please make sure 00:11:31.038 this does not affect your deployments: 00:11:31.038 - DUP for metadata (-m dup) 00:11:31.038 - enabled no-holes (-O no-holes) 00:11:31.038 - enabled free-space-tree (-R free-space-tree) 00:11:31.038 00:11:31.038 Label: (null) 00:11:31.038 UUID: 4cc0d03d-09e8-4384-9eaf-c57821eded09 00:11:31.038 Node size: 16384 00:11:31.038 Sector size: 4096 (CPU page size: 4096) 00:11:31.038 Filesystem size: 510.00MiB 00:11:31.038 Block group profiles: 00:11:31.038 Data: single 8.00MiB 00:11:31.038 Metadata: DUP 32.00MiB 00:11:31.038 System: DUP 8.00MiB 00:11:31.038 SSD detected: yes 00:11:31.038 Zoned device: no 00:11:31.038 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:31.038 Checksum: crc32c 00:11:31.038 Number of devices: 1 00:11:31.038 Devices: 00:11:31.038 ID SIZE PATH 00:11:31.038 1 510.00MiB /dev/nvme0n1p1 00:11:31.038 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:31.038 15:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:31.295 15:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:31.295 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:31.295 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:31.295 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:31.295 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:31.295 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:31.295 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2914884 00:11:31.295 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:31.295 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:31.295 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:31.295 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:31.295 00:11:31.295 real 0m1.155s 00:11:31.295 user 0m0.026s 00:11:31.295 sys 0m0.115s 00:11:31.295 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.295 
15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:31.295 ************************************ 00:11:31.295 END TEST filesystem_btrfs 00:11:31.295 ************************************ 00:11:31.551 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:31.551 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:31.551 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.551 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.551 ************************************ 00:11:31.551 START TEST filesystem_xfs 00:11:31.551 ************************************ 00:11:31.551 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:31.552 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:31.552 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:31.552 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:31.552 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:31.552 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:31.552 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:31.552 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:31.552 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:31.552 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:31.552 15:28:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:31.552 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:31.552 = sectsz=512 attr=2, projid32bit=1 00:11:31.552 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:31.552 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:31.552 data = bsize=4096 blocks=130560, imaxpct=25 00:11:31.552 = sunit=0 swidth=0 blks 00:11:31.552 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:31.552 log =internal log bsize=4096 blocks=16384, version=2 00:11:31.552 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:31.552 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:32.479 Discarding blocks...Done. 
00:11:32.479 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0
00:11:32.479 15:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:34.369 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:34.369 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:11:34.369 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:34.369 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:11:34.369 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:11:34.369 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:34.369 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2914884
00:11:34.369 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:34.369 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:34.369 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:34.369 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:34.369
00:11:34.369 real 0m2.965s
00:11:34.369 user 0m0.033s
00:11:34.369 sys 0m0.066s
00:11:34.369 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:34.369 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:11:34.369 ************************************
00:11:34.369 END TEST filesystem_xfs
************************************
00:11:34.369 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:11:34.635 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:11:34.635 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:34.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2914884
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2914884 ']'
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2914884
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2914884
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2914884'
00:11:34.895 killing process with pid 2914884
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2914884
00:11:34.895 15:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2914884
00:11:35.154 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:11:35.154
00:11:35.154 real 0m16.482s
00:11:35.154 user 1m4.802s
00:11:35.154 sys 0m1.416s
00:11:35.154 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:35.154 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:35.154 ************************************
00:11:35.154 END TEST nvmf_filesystem_no_in_capsule
************************************
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:11:35.413 ************************************
00:11:35.413 START TEST nvmf_filesystem_in_capsule
************************************
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2917760
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2917760
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2917760 ']'
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:35.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:35.413 15:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:35.413 [2024-12-06 15:28:41.259251] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization...
00:11:35.413 [2024-12-06 15:28:41.259298] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:35.413 [2024-12-06 15:28:41.340148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:35.413 [2024-12-06 15:28:41.384616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:35.413 [2024-12-06 15:28:41.384652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:35.413 [2024-12-06 15:28:41.384660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:35.413 [2024-12-06 15:28:41.384666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:35.413 [2024-12-06 15:28:41.384671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:35.413 [2024-12-06 15:28:41.386241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:35.413 [2024-12-06 15:28:41.386350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:35.413 [2024-12-06 15:28:41.386459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:35.413 [2024-12-06 15:28:41.386460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:36.343 [2024-12-06 15:28:42.130724] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:36.343 Malloc1
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:36.343 [2024-12-06 15:28:42.293555] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:36.343 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:11:36.343 {
00:11:36.344 "name": "Malloc1",
00:11:36.344 "aliases": [
00:11:36.344 "e4d88b6d-eeca-45dc-bbe6-9d6a81dc371a"
00:11:36.344 ],
00:11:36.344 "product_name": "Malloc disk",
00:11:36.344 "block_size": 512,
00:11:36.344 "num_blocks": 1048576,
00:11:36.344 "uuid": "e4d88b6d-eeca-45dc-bbe6-9d6a81dc371a",
00:11:36.344 "assigned_rate_limits": {
00:11:36.344 "rw_ios_per_sec": 0,
00:11:36.344 "rw_mbytes_per_sec": 0,
00:11:36.344 "r_mbytes_per_sec": 0,
00:11:36.344 "w_mbytes_per_sec": 0
00:11:36.344 },
00:11:36.344 "claimed": true,
00:11:36.344 "claim_type": "exclusive_write",
00:11:36.344 "zoned": false,
00:11:36.344 "supported_io_types": {
00:11:36.344 "read": true,
00:11:36.344 "write": true,
00:11:36.344 "unmap": true,
00:11:36.344 "flush": true,
00:11:36.344 "reset": true,
00:11:36.344 "nvme_admin": false,
00:11:36.344 "nvme_io": false,
00:11:36.344 "nvme_io_md": false,
00:11:36.344 "write_zeroes": true,
00:11:36.344 "zcopy": true,
00:11:36.344 "get_zone_info": false,
00:11:36.344 "zone_management": false,
00:11:36.344 "zone_append": false,
00:11:36.344 "compare": false,
00:11:36.344 "compare_and_write": false,
00:11:36.344 "abort": true,
00:11:36.344 "seek_hole": false,
00:11:36.344 "seek_data": false,
00:11:36.344 "copy": true,
00:11:36.344 "nvme_iov_md": false
00:11:36.344 },
00:11:36.344 "memory_domains": [
00:11:36.344 {
00:11:36.344 "dma_device_id": "system",
00:11:36.344 "dma_device_type": 1
00:11:36.344 },
00:11:36.344 {
00:11:36.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:36.344 "dma_device_type": 2
00:11:36.344 }
00:11:36.344 ],
00:11:36.344 "driver_specific": {}
00:11:36.344 }
00:11:36.344 ]'
00:11:36.344 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:11:36.601 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:11:36.601 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:11:36.601 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:11:36.601 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:11:36.601 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:11:36.601 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:11:36.601 15:28:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:37.532 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:11:37.532 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:11:37.532 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:37.532 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:37.532 15:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:11:40.051 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:11:40.052 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:11:40.052 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:11:40.052 15:28:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:11:40.052 15:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:11:41.421 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:11:41.421 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:11:41.421 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:41.421 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:41.421 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:41.421 ************************************
00:11:41.421 START TEST filesystem_in_capsule_ext4
************************************
00:11:41.421 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:11:41.421 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:11:41.421 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:41.421 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:11:41.421 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:11:41.421 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:11:41.422 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:11:41.422 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force
00:11:41.422 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:11:41.422 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:11:41.422 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:11:41.422 mke2fs 1.47.0 (5-Feb-2023)
00:11:41.422 Discarding device blocks: 0/522240 done
00:11:41.422 Creating filesystem with 522240 1k blocks and 130560 inodes
00:11:41.422 Filesystem UUID: bd70fab9-d026-4649-b104-046f6b5c83d3
00:11:41.422 Superblock backups stored on blocks:
00:11:41.422 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:11:41.422
00:11:41.422 Allocating group tables: 0/64 done
00:11:41.422 Writing inode tables: 0/64 done
00:11:41.678 Creating journal (8192 blocks): done
00:11:41.678 Writing superblocks and filesystem accounting information: 0/64 done
00:11:41.678
00:11:41.678 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0
00:11:41.678 15:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2917760
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:47.076
00:11:47.076 real 0m5.784s
00:11:47.076 user 0m0.030s
00:11:47.076 sys 0m0.069s
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:11:47.076 ************************************
00:11:47.076 END TEST filesystem_in_capsule_ext4
************************************
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:47.076 ************************************
00:11:47.076 START TEST filesystem_in_capsule_btrfs
************************************
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:11:47.076 15:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:11:47.334 btrfs-progs v6.8.1
00:11:47.334 See https://btrfs.readthedocs.io for more information.
00:11:47.334
00:11:47.334 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:11:47.334 NOTE: several default settings have changed in version 5.15, please make sure
00:11:47.334 this does not affect your deployments:
00:11:47.334 - DUP for metadata (-m dup)
00:11:47.334 - enabled no-holes (-O no-holes)
00:11:47.334 - enabled free-space-tree (-R free-space-tree)
00:11:47.334
00:11:47.334 Label: (null)
00:11:47.334 UUID: 77da5368-25d4-4c64-afd0-d042a1d1cbd2
00:11:47.334 Node size: 16384
00:11:47.334 Sector size: 4096 (CPU page size: 4096)
00:11:47.334 Filesystem size: 510.00MiB
00:11:47.334 Block group profiles:
00:11:47.334 Data: single 8.00MiB
00:11:47.334 Metadata: DUP 32.00MiB
00:11:47.334 System: DUP 8.00MiB
00:11:47.334 SSD detected: yes
00:11:47.334 Zoned device: no
00:11:47.334 Features: extref, skinny-metadata, no-holes, free-space-tree
00:11:47.334 Checksum: crc32c
00:11:47.334 Number of devices: 1
00:11:47.334 Devices:
00:11:47.334 ID SIZE PATH
00:11:47.334 1 510.00MiB /dev/nvme0n1p1
00:11:47.334
00:11:47.334 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0
00:11:47.334 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:47.590 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:47.590 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:11:47.590 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:47.590 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:11:47.590 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:11:47.590 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:47.591 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2917760
00:11:47.591 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:47.591 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:47.591 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:47.591 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:47.591
00:11:47.591 real 0m0.665s
00:11:47.591 user 0m0.028s
00:11:47.591 sys 0m0.108s
00:11:47.591 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:47.591 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:11:47.591 ************************************
00:11:47.591 END TEST filesystem_in_capsule_btrfs
************************************
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:47.848 ************************************
00:11:47.848 START TEST filesystem_in_capsule_xfs
************************************
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f
00:11:47.848 15:28:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:11:47.848 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:11:47.848 = sectsz=512 attr=2, projid32bit=1
00:11:47.848 = crc=1 finobt=1, sparse=1, rmapbt=0
00:11:47.848 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:11:47.848 data = bsize=4096 blocks=130560, imaxpct=25
00:11:47.848 = sunit=0 swidth=0 blks
00:11:47.848 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:11:47.848 log =internal log bsize=4096 blocks=16384, version=2
00:11:47.848 = sectsz=512 sunit=0 blks, lazy-count=1
00:11:47.848 realtime =none extsz=4096 blocks=0, rtextents=0
00:11:48.778 Discarding blocks...Done.
00:11:48.778 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:48.778 15:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:50.671 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2917760 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:50.930 00:11:50.930 real 0m3.084s 00:11:50.930 user 0m0.026s 00:11:50.930 sys 0m0.073s 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:50.930 ************************************ 00:11:50.930 END TEST filesystem_in_capsule_xfs 00:11:50.930 ************************************ 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.930 15:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2917760 00:11:50.930 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2917760 ']' 00:11:50.931 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2917760 00:11:50.931 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:50.931 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.931 15:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2917760 00:11:51.189 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.189 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.189 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2917760' 00:11:51.189 killing process with pid 2917760 00:11:51.189 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2917760 00:11:51.189 15:28:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2917760 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:51.449 00:11:51.449 real 0m16.096s 00:11:51.449 user 1m3.431s 00:11:51.449 sys 0m1.385s 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.449 ************************************ 00:11:51.449 END TEST nvmf_filesystem_in_capsule 00:11:51.449 ************************************ 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:51.449 rmmod nvme_tcp 00:11:51.449 rmmod nvme_fabrics 00:11:51.449 rmmod nvme_keyring 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.449 15:28:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:53.987 00:11:53.987 real 0m41.328s 00:11:53.987 user 2m10.320s 00:11:53.987 sys 0m7.478s 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.987 ************************************ 00:11:53.987 END TEST nvmf_filesystem 00:11:53.987 ************************************ 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:53.987 ************************************ 00:11:53.987 START TEST nvmf_target_discovery 00:11:53.987 ************************************ 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:53.987 * Looking for test storage... 
00:11:53.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:53.987 
15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:53.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.987 --rc genhtml_branch_coverage=1 00:11:53.987 --rc genhtml_function_coverage=1 00:11:53.987 --rc genhtml_legend=1 00:11:53.987 --rc geninfo_all_blocks=1 00:11:53.987 --rc geninfo_unexecuted_blocks=1 00:11:53.987 00:11:53.987 ' 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:53.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.987 --rc genhtml_branch_coverage=1 00:11:53.987 --rc genhtml_function_coverage=1 00:11:53.987 --rc genhtml_legend=1 00:11:53.987 --rc geninfo_all_blocks=1 00:11:53.987 --rc geninfo_unexecuted_blocks=1 00:11:53.987 00:11:53.987 ' 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:53.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.987 --rc genhtml_branch_coverage=1 00:11:53.987 --rc genhtml_function_coverage=1 00:11:53.987 --rc genhtml_legend=1 00:11:53.987 --rc geninfo_all_blocks=1 00:11:53.987 --rc geninfo_unexecuted_blocks=1 00:11:53.987 00:11:53.987 ' 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:53.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.987 --rc genhtml_branch_coverage=1 00:11:53.987 --rc genhtml_function_coverage=1 00:11:53.987 --rc genhtml_legend=1 00:11:53.987 --rc geninfo_all_blocks=1 00:11:53.987 --rc geninfo_unexecuted_blocks=1 00:11:53.987 00:11:53.987 ' 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.987 15:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.987 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:53.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:53.988 15:28:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.557 15:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.557 15:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:00.557 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:00.557 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.557 15:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:00.557 Found net devices under 0000:86:00.0: cvl_0_0 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.557 15:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:00.557 Found net devices under 0000:86:00.1: cvl_0_1 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:00.557 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:00.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:12:00.558 00:12:00.558 --- 10.0.0.2 ping statistics --- 00:12:00.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.558 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:00.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:12:00.558 00:12:00.558 --- 10.0.0.1 ping statistics --- 00:12:00.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.558 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2924265 00:12:00.558 15:29:05 
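The `nvmf_tcp_init` trace above (nvmf/common.sh@250-291) builds a two-interface loopback topology: the target NIC is moved into a private network namespace, each side gets a 10.0.0.x/24 address, an iptables rule opens port 4420, and a ping in each direction verifies connectivity. A dry-run sketch of that sequence, printing the commands instead of executing them (the real thing needs root and the two `ice` NICs, `cvl_0_0`/`cvl_0_1`):

```shell
#!/bin/sh
# Dry-run of the netns topology set up in the log above. Illustrative only:
# each command is echoed, not run, so this works without root or the NICs.
dry_run_topology() {
    run() { echo "+ $*"; }
    NS=cvl_0_0_ns_spdk
    run ip netns add "$NS"                        # target side gets its own netns
    run ip link set cvl_0_0 netns "$NS"           # move the target NIC into it
    run ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator IP on the host side
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    run ip link set cvl_0_1 up
    run ip netns exec "$NS" ip link set cvl_0_0 up
    run ip netns exec "$NS" ip link set lo up
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                        # host -> target reachability
    run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> host reachability
}
dry_run_topology
```

Because the target runs inside the namespace, `nvmf_tgt` is later launched via `ip netns exec cvl_0_0_ns_spdk ...` (the `NVMF_TARGET_NS_CMD` array in the trace), so its listener on 10.0.0.2:4420 is only reachable over `cvl_0_1`.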
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2924265 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2924265 ']' 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 [2024-12-06 15:29:05.775959] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:12:00.558 [2024-12-06 15:29:05.776004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.558 [2024-12-06 15:29:05.852798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.558 [2024-12-06 15:29:05.894714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:00.558 [2024-12-06 15:29:05.894748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.558 [2024-12-06 15:29:05.894755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.558 [2024-12-06 15:29:05.894761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.558 [2024-12-06 15:29:05.894766] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.558 [2024-12-06 15:29:05.896320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.558 [2024-12-06 15:29:05.896444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.558 [2024-12-06 15:29:05.896557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.558 [2024-12-06 15:29:05.896558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.558 15:29:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 [2024-12-06 15:29:06.038867] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 Null1 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 
15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 [2024-12-06 15:29:06.101522] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 Null2 00:12:00.558 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.559 
15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.559 Null3 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.559 Null4 00:12:00.559 
15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
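The discovery.sh@26-35 trace above runs the same four-step recipe for each of four subsystems: create a null bdev, create the subsystem, attach the bdev as a namespace, add a TCP listener; then a listener on the discovery subsystem and a referral to port 4430. A sketch of that loop with `rpc_cmd` stubbed out as `echo` (in the real test it drives SPDK's rpc.py against the target started earlier):

```shell
#!/bin/sh
# Sketch of the discovery.sh loop traced above; rpc_cmd is a stub here.
rpc_cmd() { echo "rpc_cmd $*"; }

setup_subsystems() {
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create "Null$i" 102400 512
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    # discovery-subsystem listener plus a referral, as in sh@32 and sh@35 above
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
}
setup_subsystems
```

This is why the subsequent `nvme discover` reports six records: the current discovery subsystem, the four NVMe subsystems, and the 4430 referral.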
common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:00.559 00:12:00.559 Discovery Log Number of Records 6, Generation counter 6 00:12:00.559 =====Discovery Log Entry 0====== 00:12:00.559 trtype: tcp 00:12:00.559 adrfam: ipv4 00:12:00.559 subtype: current discovery subsystem 00:12:00.559 treq: not required 00:12:00.559 portid: 0 00:12:00.559 trsvcid: 4420 00:12:00.559 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:00.559 traddr: 10.0.0.2 00:12:00.559 eflags: explicit discovery connections, duplicate discovery information 00:12:00.559 sectype: none 00:12:00.559 =====Discovery Log Entry 1====== 00:12:00.559 trtype: tcp 00:12:00.559 adrfam: ipv4 00:12:00.559 subtype: nvme subsystem 00:12:00.559 treq: not required 00:12:00.559 portid: 0 00:12:00.559 trsvcid: 4420 00:12:00.559 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:00.559 traddr: 10.0.0.2 00:12:00.559 eflags: none 00:12:00.559 sectype: none 00:12:00.559 =====Discovery Log Entry 2====== 00:12:00.559 
trtype: tcp 00:12:00.559 adrfam: ipv4 00:12:00.559 subtype: nvme subsystem 00:12:00.559 treq: not required 00:12:00.559 portid: 0 00:12:00.559 trsvcid: 4420 00:12:00.559 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:00.559 traddr: 10.0.0.2 00:12:00.559 eflags: none 00:12:00.559 sectype: none 00:12:00.559 =====Discovery Log Entry 3====== 00:12:00.559 trtype: tcp 00:12:00.559 adrfam: ipv4 00:12:00.559 subtype: nvme subsystem 00:12:00.559 treq: not required 00:12:00.559 portid: 0 00:12:00.559 trsvcid: 4420 00:12:00.559 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:00.559 traddr: 10.0.0.2 00:12:00.559 eflags: none 00:12:00.559 sectype: none 00:12:00.559 =====Discovery Log Entry 4====== 00:12:00.559 trtype: tcp 00:12:00.559 adrfam: ipv4 00:12:00.559 subtype: nvme subsystem 00:12:00.559 treq: not required 00:12:00.559 portid: 0 00:12:00.559 trsvcid: 4420 00:12:00.559 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:00.559 traddr: 10.0.0.2 00:12:00.559 eflags: none 00:12:00.559 sectype: none 00:12:00.559 =====Discovery Log Entry 5====== 00:12:00.559 trtype: tcp 00:12:00.559 adrfam: ipv4 00:12:00.559 subtype: discovery subsystem referral 00:12:00.559 treq: not required 00:12:00.559 portid: 0 00:12:00.559 trsvcid: 4430 00:12:00.559 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:00.559 traddr: 10.0.0.2 00:12:00.559 eflags: none 00:12:00.559 sectype: none 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:00.559 Perform nvmf subsystem discovery via RPC 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.559 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.559 [ 00:12:00.559 { 00:12:00.559 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:00.559 "subtype": "Discovery", 00:12:00.559 "listen_addresses": [ 00:12:00.559 { 00:12:00.559 "trtype": "TCP", 00:12:00.559 "adrfam": "IPv4", 00:12:00.559 "traddr": "10.0.0.2", 00:12:00.559 "trsvcid": "4420" 00:12:00.559 } 00:12:00.559 ], 00:12:00.559 "allow_any_host": true, 00:12:00.559 "hosts": [] 00:12:00.559 }, 00:12:00.559 { 00:12:00.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:00.559 "subtype": "NVMe", 00:12:00.559 "listen_addresses": [ 00:12:00.559 { 00:12:00.559 "trtype": "TCP", 00:12:00.559 "adrfam": "IPv4", 00:12:00.559 "traddr": "10.0.0.2", 00:12:00.559 "trsvcid": "4420" 00:12:00.559 } 00:12:00.559 ], 00:12:00.559 "allow_any_host": true, 00:12:00.559 "hosts": [], 00:12:00.559 "serial_number": "SPDK00000000000001", 00:12:00.559 "model_number": "SPDK bdev Controller", 00:12:00.559 "max_namespaces": 32, 00:12:00.559 "min_cntlid": 1, 00:12:00.559 "max_cntlid": 65519, 00:12:00.559 "namespaces": [ 00:12:00.559 { 00:12:00.560 "nsid": 1, 00:12:00.560 "bdev_name": "Null1", 00:12:00.560 "name": "Null1", 00:12:00.560 "nguid": "3DDAB2C66F864736AC8CC02B0A94E023", 00:12:00.560 "uuid": "3ddab2c6-6f86-4736-ac8c-c02b0a94e023" 00:12:00.560 } 00:12:00.560 ] 00:12:00.560 }, 00:12:00.560 { 00:12:00.560 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:00.560 "subtype": "NVMe", 00:12:00.560 "listen_addresses": [ 00:12:00.560 { 00:12:00.560 "trtype": "TCP", 00:12:00.560 "adrfam": "IPv4", 00:12:00.560 "traddr": "10.0.0.2", 00:12:00.560 "trsvcid": "4420" 00:12:00.560 } 00:12:00.560 ], 00:12:00.560 "allow_any_host": true, 00:12:00.560 "hosts": [], 00:12:00.560 "serial_number": "SPDK00000000000002", 00:12:00.560 "model_number": "SPDK bdev Controller", 00:12:00.560 "max_namespaces": 32, 00:12:00.560 "min_cntlid": 1, 00:12:00.560 "max_cntlid": 65519, 00:12:00.560 "namespaces": [ 00:12:00.560 { 00:12:00.560 "nsid": 1, 00:12:00.560 "bdev_name": "Null2", 00:12:00.560 "name": "Null2", 00:12:00.560 "nguid": "43F96C62768D417F9D38B404C1310A65", 
00:12:00.560 "uuid": "43f96c62-768d-417f-9d38-b404c1310a65" 00:12:00.560 } 00:12:00.560 ] 00:12:00.560 }, 00:12:00.560 { 00:12:00.560 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:00.560 "subtype": "NVMe", 00:12:00.560 "listen_addresses": [ 00:12:00.560 { 00:12:00.560 "trtype": "TCP", 00:12:00.560 "adrfam": "IPv4", 00:12:00.560 "traddr": "10.0.0.2", 00:12:00.560 "trsvcid": "4420" 00:12:00.560 } 00:12:00.560 ], 00:12:00.560 "allow_any_host": true, 00:12:00.560 "hosts": [], 00:12:00.560 "serial_number": "SPDK00000000000003", 00:12:00.560 "model_number": "SPDK bdev Controller", 00:12:00.560 "max_namespaces": 32, 00:12:00.560 "min_cntlid": 1, 00:12:00.560 "max_cntlid": 65519, 00:12:00.560 "namespaces": [ 00:12:00.560 { 00:12:00.560 "nsid": 1, 00:12:00.560 "bdev_name": "Null3", 00:12:00.560 "name": "Null3", 00:12:00.560 "nguid": "60D76D28AA304061B01557EFBA7092A4", 00:12:00.560 "uuid": "60d76d28-aa30-4061-b015-57efba7092a4" 00:12:00.560 } 00:12:00.560 ] 00:12:00.560 }, 00:12:00.560 { 00:12:00.560 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:00.560 "subtype": "NVMe", 00:12:00.560 "listen_addresses": [ 00:12:00.560 { 00:12:00.560 "trtype": "TCP", 00:12:00.560 "adrfam": "IPv4", 00:12:00.560 "traddr": "10.0.0.2", 00:12:00.560 "trsvcid": "4420" 00:12:00.560 } 00:12:00.560 ], 00:12:00.560 "allow_any_host": true, 00:12:00.560 "hosts": [], 00:12:00.560 "serial_number": "SPDK00000000000004", 00:12:00.560 "model_number": "SPDK bdev Controller", 00:12:00.560 "max_namespaces": 32, 00:12:00.560 "min_cntlid": 1, 00:12:00.560 "max_cntlid": 65519, 00:12:00.560 "namespaces": [ 00:12:00.560 { 00:12:00.560 "nsid": 1, 00:12:00.560 "bdev_name": "Null4", 00:12:00.560 "name": "Null4", 00:12:00.560 "nguid": "FBE92F08115A42A783428C905B3677E7", 00:12:00.560 "uuid": "fbe92f08-115a-42a7-8342-8c905b3677e7" 00:12:00.560 } 00:12:00.560 ] 00:12:00.560 } 00:12:00.560 ] 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.560 
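The `nvme discover` output above is plain text with one `=====Discovery Log Entry N======` header per record. A hypothetical one-liner helper for counting records in that format, shown against a trimmed two-record sample so it runs anywhere (the live test would pipe the real discover output through it):

```shell
#!/bin/sh
# Hypothetical helper: count records in nvme-discover-style text on stdin.
count_discovery_records() {
    grep -c '^=====Discovery Log Entry'
}

sample='=====Discovery Log Entry 0======
trtype: tcp
subtype: current discovery subsystem
=====Discovery Log Entry 1======
trtype: tcp
subnqn: nqn.2016-06.io.spdk:cnode1'
echo "$sample" | count_discovery_records   # prints 2
```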
15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.560 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.819 rmmod nvme_tcp 00:12:00.819 rmmod nvme_fabrics 00:12:00.819 rmmod nvme_keyring 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2924265 ']' 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2924265 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2924265 ']' 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2924265 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2924265 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2924265' 00:12:00.819 killing process with pid 2924265 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2924265 00:12:00.819 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2924265 00:12:01.077 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:01.077 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:01.077 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:01.078 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:01.078 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:01.078 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:01.078 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:01.078 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:01.078 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:01.078 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.078 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.078 15:29:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.982 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.982 00:12:02.982 real 0m9.360s 00:12:02.982 user 0m5.626s 00:12:02.982 sys 0m4.828s 00:12:02.982 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.982 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:02.982 ************************************ 00:12:02.982 END TEST nvmf_target_discovery 00:12:02.982 ************************************ 00:12:02.982 15:29:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:02.982 15:29:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.982 15:29:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.982 15:29:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.242 ************************************ 00:12:03.242 START TEST nvmf_referrals 00:12:03.242 ************************************ 00:12:03.242 15:29:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:03.242 * Looking for test storage... 
00:12:03.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.242 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:03.242 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:03.242 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:03.242 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:03.242 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.242 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.242 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.242 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.242 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.242 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.242 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.242 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.242 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.242 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:03.243 15:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:03.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.243 
--rc genhtml_branch_coverage=1 00:12:03.243 --rc genhtml_function_coverage=1 00:12:03.243 --rc genhtml_legend=1 00:12:03.243 --rc geninfo_all_blocks=1 00:12:03.243 --rc geninfo_unexecuted_blocks=1 00:12:03.243 00:12:03.243 ' 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:03.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.243 --rc genhtml_branch_coverage=1 00:12:03.243 --rc genhtml_function_coverage=1 00:12:03.243 --rc genhtml_legend=1 00:12:03.243 --rc geninfo_all_blocks=1 00:12:03.243 --rc geninfo_unexecuted_blocks=1 00:12:03.243 00:12:03.243 ' 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:03.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.243 --rc genhtml_branch_coverage=1 00:12:03.243 --rc genhtml_function_coverage=1 00:12:03.243 --rc genhtml_legend=1 00:12:03.243 --rc geninfo_all_blocks=1 00:12:03.243 --rc geninfo_unexecuted_blocks=1 00:12:03.243 00:12:03.243 ' 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:03.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.243 --rc genhtml_branch_coverage=1 00:12:03.243 --rc genhtml_function_coverage=1 00:12:03.243 --rc genhtml_legend=1 00:12:03.243 --rc geninfo_all_blocks=1 00:12:03.243 --rc geninfo_unexecuted_blocks=1 00:12:03.243 00:12:03.243 ' 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.243 
15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.243 15:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.243 15:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.243 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:03.244 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:03.244 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.244 15:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:09.811 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.811 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:09.812 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:09.812 Found net devices under 0000:86:00.0: cvl_0_0 00:12:09.812 15:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:09.812 Found net devices under 0000:86:00.1: cvl_0_1 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.812 15:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.429 ms 00:12:09.812 00:12:09.812 --- 10.0.0.2 ping statistics --- 00:12:09.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.812 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:09.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:12:09.812 00:12:09.812 --- 10.0.0.1 ping statistics --- 00:12:09.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.812 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2927968 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2927968 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2927968 ']' 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.812 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.812 [2024-12-06 15:29:15.275026] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:12:09.812 [2024-12-06 15:29:15.275069] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.812 [2024-12-06 15:29:15.356117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.812 [2024-12-06 15:29:15.396945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.813 [2024-12-06 15:29:15.396982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:09.813 [2024-12-06 15:29:15.396990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.813 [2024-12-06 15:29:15.396996] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.813 [2024-12-06 15:29:15.397001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.813 [2024-12-06 15:29:15.398575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.813 [2024-12-06 15:29:15.398681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.813 [2024-12-06 15:29:15.398788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.813 [2024-12-06 15:29:15.398789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.813 [2024-12-06 15:29:15.548987] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.813 [2024-12-06 15:29:15.573553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:09.813 15:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:09.813 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:10.071 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:10.071 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:10.071 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:10.071 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.071 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.071 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.072 15:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:10.072 15:29:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:10.330 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:10.589 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:10.589 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:10.589 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:10.589 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:10.589 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:10.589 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:10.589 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.847 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:11.105 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.105 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:11.105 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:11.105 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:11.105 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:11.105 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:11.105 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.105 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:11.105 15:29:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:11.105 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:11.105 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:11.105 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:11.105 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:11.105 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:11.105 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.105 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:11.364 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:11.364 15:29:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:11.364 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:11.364 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:11.365 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.365 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:11.623 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:11.882 rmmod nvme_tcp 00:12:11.882 rmmod nvme_fabrics 00:12:11.882 rmmod nvme_keyring 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2927968 ']' 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2927968 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2927968 ']' 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2927968 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2927968 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2927968' 00:12:11.882 killing process with pid 2927968 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2927968 00:12:11.882 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2927968 00:12:12.142 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:12.142 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:12.142 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:12.142 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:12.142 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:12.142 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:12.142 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:12.142 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:12.142 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:12.142 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.142 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.142 15:29:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.047 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.047 00:12:14.047 real 0m11.038s 00:12:14.047 user 0m12.598s 00:12:14.047 sys 0m5.354s 00:12:14.047 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.047 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.048 
************************************ 00:12:14.048 END TEST nvmf_referrals 00:12:14.048 ************************************ 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.308 ************************************ 00:12:14.308 START TEST nvmf_connect_disconnect 00:12:14.308 ************************************ 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:14.308 * Looking for test storage... 
00:12:14.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.308 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:14.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.309 --rc genhtml_branch_coverage=1 00:12:14.309 --rc genhtml_function_coverage=1 00:12:14.309 --rc genhtml_legend=1 00:12:14.309 --rc geninfo_all_blocks=1 00:12:14.309 --rc geninfo_unexecuted_blocks=1 00:12:14.309 00:12:14.309 ' 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:14.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.309 --rc genhtml_branch_coverage=1 00:12:14.309 --rc genhtml_function_coverage=1 00:12:14.309 --rc genhtml_legend=1 00:12:14.309 --rc geninfo_all_blocks=1 00:12:14.309 --rc geninfo_unexecuted_blocks=1 00:12:14.309 00:12:14.309 ' 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:14.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.309 --rc genhtml_branch_coverage=1 00:12:14.309 --rc genhtml_function_coverage=1 00:12:14.309 --rc genhtml_legend=1 00:12:14.309 --rc geninfo_all_blocks=1 00:12:14.309 --rc geninfo_unexecuted_blocks=1 00:12:14.309 00:12:14.309 ' 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:14.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.309 --rc genhtml_branch_coverage=1 00:12:14.309 --rc genhtml_function_coverage=1 00:12:14.309 --rc genhtml_legend=1 00:12:14.309 --rc geninfo_all_blocks=1 00:12:14.309 --rc geninfo_unexecuted_blocks=1 00:12:14.309 00:12:14.309 ' 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:14.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.309 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.569 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.569 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:14.569 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:14.569 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:14.569 15:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:21.138 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.138 15:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:21.138 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:21.139 15:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:21.139 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:21.139 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.139 15:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:21.139 Found net devices under 0000:86:00.0: cvl_0_0 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.139 15:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:21.139 Found net devices under 0000:86:00.1: cvl_0_1 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.139 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.140 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:21.140 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:21.140 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.140 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.140 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:21.140 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:21.140 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.140 15:29:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.140 15:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:21.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:12:21.140 00:12:21.140 --- 10.0.0.2 ping statistics --- 00:12:21.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.140 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:21.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:12:21.140 00:12:21.140 --- 10.0.0.1 ping statistics --- 00:12:21.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.140 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
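The nvmf_tcp_init trace above (nvmf/common.sh@250–291) boils down to the following sequence. Interface names (cvl_0_0/cvl_0_1), addresses, and the netns name are the ones from this run; the `run` wrapper is a hypothetical dry-run helper added here so the steps can be read without root privileges:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the NVMe/TCP test-network setup traced in the log above.
# Replace the body of `run` with "$@" to actually execute (requires root).
TARGET_IF=cvl_0_0        # moved into the target namespace
INITIATOR_IF=cvl_0_1     # stays in the default namespace
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }   # dry-run: print each command instead of running it

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"              # target NIC into its own netns
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"       # initiator side address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# The SPDK_NVMF comment tag lets teardown strip exactly this rule later.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
run ping -c 1 10.0.0.2                                # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator
```

Isolating the target NIC in its own namespace is what lets both ends of the TCP connection live on one host while still traversing real NIC ports.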
nvmfpid=2931923 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2931923 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2931923 ']' 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:21.140 [2024-12-06 15:29:26.314876] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:12:21.140 [2024-12-06 15:29:26.314927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.140 [2024-12-06 15:29:26.394589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.140 [2024-12-06 15:29:26.435297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:21.140 [2024-12-06 15:29:26.435336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.140 [2024-12-06 15:29:26.435344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.140 [2024-12-06 15:29:26.435350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.140 [2024-12-06 15:29:26.435356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.140 [2024-12-06 15:29:26.436863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.140 [2024-12-06 15:29:26.436972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.140 [2024-12-06 15:29:26.437078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.140 [2024-12-06 15:29:26.437079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:21.140 15:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:21.140 [2024-12-06 15:29:26.579433] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:21.140 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.141 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:21.141 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.141 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:21.141 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.141 15:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:21.141 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.141 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.141 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.141 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:21.141 [2024-12-06 15:29:26.644052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.141 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.141 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:21.141 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:21.141 15:29:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:24.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:37.553 15:29:42 
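The provisioning RPCs and the five-iteration connect/disconnect loop traced above (connect_disconnect.sh@18–34) can be condensed as follows. The `scripts/rpc.py` path is an assumption (relative to an SPDK checkout), and `run` is again a hypothetical dry-run wrapper; the NQN, serial, address, and iteration count come from this log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the SPDK RPC provisioning + connect/disconnect loop above.
RPC="./scripts/rpc.py"               # assumed path inside an SPDK checkout
NQN=nqn.2016-06.io.spdk:cnode1

run() { echo "+ $*"; }               # dry-run: print, don't execute

run "$RPC" nvmf_create_transport -t tcp -o -u 8192 -c 0   # TCP transport
run "$RPC" bdev_malloc_create 64 512                      # 64 MiB RAM disk, 512 B blocks -> Malloc0
run "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
run "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0           # expose the bdev as a namespace
run "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

for i in 1 2 3 4 5; do               # num_iterations=5 in this run
  run nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$NQN"
  run nvme disconnect -n "$NQN"      # produces the "disconnected 1 controller(s)" lines
done
```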
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.553 rmmod nvme_tcp 00:12:37.553 rmmod nvme_fabrics 00:12:37.553 rmmod nvme_keyring 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2931923 ']' 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2931923 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2931923 ']' 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2931923 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.553 15:29:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2931923 
00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2931923' 00:12:37.553 killing process with pid 2931923 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2931923 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2931923 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.553 15:29:43 
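The teardown traced above (nvmftestfini / nvmf_tcp_fini) reverses the setup. A condensed, dry-run sketch follows; the pid is the one from this run, `run` is a hypothetical print-only wrapper, and the iptables-save/restore filter works because every rule inserted at setup carried the SPDK_NVMF comment tag:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the NVMe/TCP test teardown traced in the log above.
NVMF_PID=2931923                     # nvmf_tgt pid captured at startup in this run
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }               # dry-run: print, don't execute

run modprobe -v -r nvme-tcp          # unload kernel initiator modules
run modprobe -v -r nvme-fabrics
run kill "$NVMF_PID"                 # stop the nvmf_tgt reactor process
# Strip only the rules this test tagged with SPDK_NVMF:
run "iptables-save | grep -v SPDK_NVMF | iptables-restore"
run ip netns delete "$NS"            # _remove_spdk_ns
run ip -4 addr flush cvl_0_1         # clear the initiator-side address
```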
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.553 15:29:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.457 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:39.457 00:12:39.457 real 0m25.180s 00:12:39.457 user 1m8.216s 00:12:39.457 sys 0m5.883s 00:12:39.457 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.457 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:39.457 ************************************ 00:12:39.457 END TEST nvmf_connect_disconnect 00:12:39.457 ************************************ 00:12:39.457 15:29:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:39.457 15:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.457 15:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.457 15:29:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.457 ************************************ 00:12:39.457 START TEST nvmf_multitarget 00:12:39.457 ************************************ 00:12:39.457 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:39.457 * Looking for test storage... 
00:12:39.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.457 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:39.457 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:39.457 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:39.715 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:39.716 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.716 --rc genhtml_branch_coverage=1 00:12:39.716 --rc genhtml_function_coverage=1 00:12:39.716 --rc genhtml_legend=1 00:12:39.716 --rc geninfo_all_blocks=1 00:12:39.716 --rc geninfo_unexecuted_blocks=1 00:12:39.716 00:12:39.716 ' 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:39.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.716 --rc genhtml_branch_coverage=1 00:12:39.716 --rc genhtml_function_coverage=1 00:12:39.716 --rc genhtml_legend=1 00:12:39.716 --rc geninfo_all_blocks=1 00:12:39.716 --rc geninfo_unexecuted_blocks=1 00:12:39.716 00:12:39.716 ' 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:39.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.716 --rc genhtml_branch_coverage=1 00:12:39.716 --rc genhtml_function_coverage=1 00:12:39.716 --rc genhtml_legend=1 00:12:39.716 --rc geninfo_all_blocks=1 00:12:39.716 --rc geninfo_unexecuted_blocks=1 00:12:39.716 00:12:39.716 ' 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:39.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.716 --rc genhtml_branch_coverage=1 00:12:39.716 --rc genhtml_function_coverage=1 00:12:39.716 --rc genhtml_legend=1 00:12:39.716 --rc geninfo_all_blocks=1 00:12:39.716 --rc geninfo_unexecuted_blocks=1 00:12:39.716 00:12:39.716 ' 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.716 15:29:45 
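The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on the characters `.-:` and compares them field by field, with missing fields treated as 0. A minimal standalone re-implementation of that comparison (the function name `lt_ver` is ours, not SPDK's):

```shell
#!/usr/bin/env bash
# Field-wise dotted-version "less than", as traced in scripts/common.sh above.
lt_ver() {
  local IFS=.-:                       # same separator set as the trace (IFS=.-:)
  local -a a b
  local i n
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do     # missing fields default to 0
    if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
    if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
  done
  return 1                            # equal -> not less-than
}
```

So `lt_ver 1.15 2` succeeds because the first fields compare 1 < 2, even though lexically "1.15" sorts after "2" would suggest otherwise for naive string comparison.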
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.716 15:29:45 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:39.716 15:29:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:46.282 15:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.282 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:46.283 15:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:46.283 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:46.283 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.283 15:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:46.283 Found net devices under 0000:86:00.0: cvl_0_0 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.283 
15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:46.283 Found net devices under 0000:86:00.1: cvl_0_1 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.283 15:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:46.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:12:46.283 00:12:46.283 --- 10.0.0.2 ping statistics --- 00:12:46.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.283 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:46.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:12:46.283 00:12:46.283 --- 10.0.0.1 ping statistics --- 00:12:46.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.283 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2938313 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 2938313 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2938313 ']' 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.283 15:29:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:46.283 [2024-12-06 15:29:51.591756] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:12:46.284 [2024-12-06 15:29:51.591806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.284 [2024-12-06 15:29:51.670986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.284 [2024-12-06 15:29:51.713319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.284 [2024-12-06 15:29:51.713354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:46.284 [2024-12-06 15:29:51.713361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.284 [2024-12-06 15:29:51.713372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.284 [2024-12-06 15:29:51.713379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.284 [2024-12-06 15:29:51.714805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.284 [2024-12-06 15:29:51.714913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.284 [2024-12-06 15:29:51.715018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.284 [2024-12-06 15:29:51.715019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.542 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.542 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:46.542 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.542 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:46.542 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:46.542 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.542 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:46.542 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:46.542 15:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:46.800 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:46.800 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:46.800 "nvmf_tgt_1" 00:12:46.800 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:46.800 "nvmf_tgt_2" 00:12:46.800 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:46.800 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:47.057 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:47.057 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:47.057 true 00:12:47.057 15:29:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:47.315 true 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:47.315 rmmod nvme_tcp 00:12:47.315 rmmod nvme_fabrics 00:12:47.315 rmmod nvme_keyring 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2938313 ']' 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2938313 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2938313 ']' 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2938313 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2938313 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.315 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2938313' 00:12:47.315 killing process with pid 2938313 00:12:47.574 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2938313 00:12:47.574 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2938313 00:12:47.574 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:47.574 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:47.574 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:47.574 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:47.574 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:47.574 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:47.574 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:47.574 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:47.574 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:47.574 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.574 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.574 15:29:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:50.110 00:12:50.110 real 0m10.194s 00:12:50.110 user 0m9.663s 00:12:50.110 sys 0m4.941s 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:50.110 ************************************ 00:12:50.110 END TEST nvmf_multitarget 00:12:50.110 ************************************ 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.110 ************************************ 00:12:50.110 START TEST nvmf_rpc 00:12:50.110 ************************************ 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:50.110 * Looking for test storage... 
00:12:50.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.110 15:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.110 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.111 --rc genhtml_branch_coverage=1 00:12:50.111 --rc genhtml_function_coverage=1 00:12:50.111 --rc genhtml_legend=1 00:12:50.111 --rc geninfo_all_blocks=1 00:12:50.111 --rc geninfo_unexecuted_blocks=1 
00:12:50.111 00:12:50.111 ' 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.111 --rc genhtml_branch_coverage=1 00:12:50.111 --rc genhtml_function_coverage=1 00:12:50.111 --rc genhtml_legend=1 00:12:50.111 --rc geninfo_all_blocks=1 00:12:50.111 --rc geninfo_unexecuted_blocks=1 00:12:50.111 00:12:50.111 ' 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.111 --rc genhtml_branch_coverage=1 00:12:50.111 --rc genhtml_function_coverage=1 00:12:50.111 --rc genhtml_legend=1 00:12:50.111 --rc geninfo_all_blocks=1 00:12:50.111 --rc geninfo_unexecuted_blocks=1 00:12:50.111 00:12:50.111 ' 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:50.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.111 --rc genhtml_branch_coverage=1 00:12:50.111 --rc genhtml_function_coverage=1 00:12:50.111 --rc genhtml_legend=1 00:12:50.111 --rc geninfo_all_blocks=1 00:12:50.111 --rc geninfo_unexecuted_blocks=1 00:12:50.111 00:12:50.111 ' 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.111 15:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:50.111 15:29:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.111 15:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.827 
15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:12:56.827 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:56.827 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:56.827 Found net devices under 0000:86:00.0: cvl_0_0 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:56.827 Found net devices under 0000:86:00.1: cvl_0_1 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.827 15:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:56.827 
15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:56.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:12:56.827 00:12:56.827 --- 10.0.0.2 ping statistics --- 00:12:56.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.827 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:56.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:56.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:12:56.827 00:12:56.827 --- 10.0.0.1 ping statistics --- 00:12:56.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.827 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.827 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2942332 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:56.828 
15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2942332 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2942332 ']' 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.828 15:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.828 [2024-12-06 15:30:01.904740] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:12:56.828 [2024-12-06 15:30:01.904789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.828 [2024-12-06 15:30:01.985030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.828 [2024-12-06 15:30:02.032192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.828 [2024-12-06 15:30:02.032227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.828 [2024-12-06 15:30:02.032234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.828 [2024-12-06 15:30:02.032241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:56.828 [2024-12-06 15:30:02.032246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.828 [2024-12-06 15:30:02.033862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.828 [2024-12-06 15:30:02.033969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.828 [2024-12-06 15:30:02.033993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.828 [2024-12-06 15:30:02.033994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:56.828 "tick_rate": 2100000000, 00:12:56.828 "poll_groups": [ 00:12:56.828 { 00:12:56.828 "name": "nvmf_tgt_poll_group_000", 00:12:56.828 "admin_qpairs": 0, 00:12:56.828 "io_qpairs": 0, 00:12:56.828 
"current_admin_qpairs": 0, 00:12:56.828 "current_io_qpairs": 0, 00:12:56.828 "pending_bdev_io": 0, 00:12:56.828 "completed_nvme_io": 0, 00:12:56.828 "transports": [] 00:12:56.828 }, 00:12:56.828 { 00:12:56.828 "name": "nvmf_tgt_poll_group_001", 00:12:56.828 "admin_qpairs": 0, 00:12:56.828 "io_qpairs": 0, 00:12:56.828 "current_admin_qpairs": 0, 00:12:56.828 "current_io_qpairs": 0, 00:12:56.828 "pending_bdev_io": 0, 00:12:56.828 "completed_nvme_io": 0, 00:12:56.828 "transports": [] 00:12:56.828 }, 00:12:56.828 { 00:12:56.828 "name": "nvmf_tgt_poll_group_002", 00:12:56.828 "admin_qpairs": 0, 00:12:56.828 "io_qpairs": 0, 00:12:56.828 "current_admin_qpairs": 0, 00:12:56.828 "current_io_qpairs": 0, 00:12:56.828 "pending_bdev_io": 0, 00:12:56.828 "completed_nvme_io": 0, 00:12:56.828 "transports": [] 00:12:56.828 }, 00:12:56.828 { 00:12:56.828 "name": "nvmf_tgt_poll_group_003", 00:12:56.828 "admin_qpairs": 0, 00:12:56.828 "io_qpairs": 0, 00:12:56.828 "current_admin_qpairs": 0, 00:12:56.828 "current_io_qpairs": 0, 00:12:56.828 "pending_bdev_io": 0, 00:12:56.828 "completed_nvme_io": 0, 00:12:56.828 "transports": [] 00:12:56.828 } 00:12:56.828 ] 00:12:56.828 }' 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:56.828 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:57.086 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:57.086 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:57.086 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:57.086 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:57.086 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.086 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.086 [2024-12-06 15:30:02.898837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.086 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.086 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:57.086 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.086 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.086 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.086 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:57.086 "tick_rate": 2100000000, 00:12:57.086 "poll_groups": [ 00:12:57.086 { 00:12:57.086 "name": "nvmf_tgt_poll_group_000", 00:12:57.086 "admin_qpairs": 0, 00:12:57.086 "io_qpairs": 0, 00:12:57.086 "current_admin_qpairs": 0, 00:12:57.086 "current_io_qpairs": 0, 00:12:57.086 "pending_bdev_io": 0, 00:12:57.086 "completed_nvme_io": 0, 00:12:57.086 "transports": [ 00:12:57.086 { 00:12:57.086 "trtype": "TCP" 00:12:57.086 } 00:12:57.086 ] 00:12:57.086 }, 00:12:57.086 { 00:12:57.086 "name": "nvmf_tgt_poll_group_001", 00:12:57.086 "admin_qpairs": 0, 00:12:57.086 "io_qpairs": 0, 00:12:57.086 "current_admin_qpairs": 0, 00:12:57.086 "current_io_qpairs": 0, 00:12:57.086 "pending_bdev_io": 0, 00:12:57.086 "completed_nvme_io": 0, 00:12:57.086 "transports": [ 00:12:57.086 { 00:12:57.086 "trtype": "TCP" 00:12:57.086 } 00:12:57.086 ] 00:12:57.086 }, 00:12:57.086 { 00:12:57.086 "name": "nvmf_tgt_poll_group_002", 00:12:57.086 "admin_qpairs": 0, 00:12:57.086 "io_qpairs": 0, 00:12:57.086 
"current_admin_qpairs": 0, 00:12:57.087 "current_io_qpairs": 0, 00:12:57.087 "pending_bdev_io": 0, 00:12:57.087 "completed_nvme_io": 0, 00:12:57.087 "transports": [ 00:12:57.087 { 00:12:57.087 "trtype": "TCP" 00:12:57.087 } 00:12:57.087 ] 00:12:57.087 }, 00:12:57.087 { 00:12:57.087 "name": "nvmf_tgt_poll_group_003", 00:12:57.087 "admin_qpairs": 0, 00:12:57.087 "io_qpairs": 0, 00:12:57.087 "current_admin_qpairs": 0, 00:12:57.087 "current_io_qpairs": 0, 00:12:57.087 "pending_bdev_io": 0, 00:12:57.087 "completed_nvme_io": 0, 00:12:57.087 "transports": [ 00:12:57.087 { 00:12:57.087 "trtype": "TCP" 00:12:57.087 } 00:12:57.087 ] 00:12:57.087 } 00:12:57.087 ] 00:12:57.087 }' 00:12:57.087 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:57.087 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:57.087 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:57.087 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:57.087 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:57.087 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:57.087 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:57.087 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:57.087 15:30:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.087 Malloc1 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.087 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.344 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.344 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.344 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.344 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.344 [2024-12-06 15:30:03.092834] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.344 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.344 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:57.344 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:57.344 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:57.344 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:57.344 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.344 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:57.344 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.344 
15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:57.345 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.345 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:57.345 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:57.345 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:57.345 [2024-12-06 15:30:03.127415] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:12:57.345 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:57.345 could not add new controller: failed to write to nvme-fabrics device 00:12:57.345 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:57.345 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:57.345 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:57.345 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:57.345 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:57.345 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.345 15:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.345 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.345 15:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.276 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.277 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:58.277 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.277 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:58.277 15:30:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.797 15:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:00.797 [2024-12-06 15:30:06.433950] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:13:00.797 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:00.797 could not add new controller: failed to write to nvme-fabrics device 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:00.797 15:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.797 15:30:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.728 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.728 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:01.728 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.728 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:01.728 15:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.257 [2024-12-06 15:30:09.787157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.257 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.258 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.258 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.258 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.258 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.258 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.258 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.258 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.258 15:30:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.205 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.205 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:05.205 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.205 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:05.205 15:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:07.103 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:07.103 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:07.103 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.103 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:07.103 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.103 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:07.103 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.103 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.103 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:07.103 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:07.103 15:30:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.103 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:07.103 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.104 15:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.104 [2024-12-06 15:30:13.050848] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.104 15:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.477 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.477 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:08.477 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.477 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:08.477 15:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:10.382 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:10.382 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:10.382 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.382 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:10.382 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.382 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:10.383 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.383 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.383 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.644 [2024-12-06 15:30:16.437074] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.644 15:30:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.576 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.576 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:11.576 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:11.576 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:11.576 15:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.105 [2024-12-06 15:30:19.735388] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.105 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.106 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.106 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.106 15:30:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.038 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.038 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:15.038 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.038 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:15.038 15:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:13:16.934 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:16.934 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:16.934 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.934 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:16.934 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.934 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:16.934 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.934 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.934 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:16.934 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:16.934 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.191 [2024-12-06 15:30:22.990490] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.191 15:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.191 15:30:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.191 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.191 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.191 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.191 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.191 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.191 15:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.138 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.138 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:18.138 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.138 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:18.138 15:30:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:20.663 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 [2024-12-06 15:30:26.301605] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 [2024-12-06 15:30:26.349717] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.664 
15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 [2024-12-06 15:30:26.397844] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:20.664 
15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 [2024-12-06 15:30:26.446008] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.664 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.665 [2024-12-06 
15:30:26.494158] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.665 
15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:20.665 "tick_rate": 2100000000, 00:13:20.665 "poll_groups": [ 00:13:20.665 { 00:13:20.665 "name": "nvmf_tgt_poll_group_000", 00:13:20.665 "admin_qpairs": 2, 00:13:20.665 "io_qpairs": 168, 00:13:20.665 "current_admin_qpairs": 0, 00:13:20.665 "current_io_qpairs": 0, 00:13:20.665 "pending_bdev_io": 0, 00:13:20.665 "completed_nvme_io": 267, 00:13:20.665 "transports": [ 00:13:20.665 { 00:13:20.665 "trtype": "TCP" 00:13:20.665 } 00:13:20.665 ] 00:13:20.665 }, 00:13:20.665 { 00:13:20.665 "name": "nvmf_tgt_poll_group_001", 00:13:20.665 "admin_qpairs": 2, 00:13:20.665 "io_qpairs": 168, 00:13:20.665 "current_admin_qpairs": 0, 00:13:20.665 "current_io_qpairs": 0, 00:13:20.665 "pending_bdev_io": 0, 00:13:20.665 "completed_nvme_io": 236, 00:13:20.665 "transports": [ 00:13:20.665 { 00:13:20.665 "trtype": "TCP" 00:13:20.665 } 00:13:20.665 ] 00:13:20.665 }, 00:13:20.665 { 00:13:20.665 "name": "nvmf_tgt_poll_group_002", 00:13:20.665 "admin_qpairs": 1, 00:13:20.665 "io_qpairs": 168, 00:13:20.665 "current_admin_qpairs": 0, 00:13:20.665 "current_io_qpairs": 0, 00:13:20.665 "pending_bdev_io": 0, 00:13:20.665 "completed_nvme_io": 298, 00:13:20.665 "transports": [ 00:13:20.665 { 00:13:20.665 "trtype": "TCP" 00:13:20.665 } 00:13:20.665 ] 00:13:20.665 }, 00:13:20.665 { 00:13:20.665 "name": "nvmf_tgt_poll_group_003", 00:13:20.665 "admin_qpairs": 2, 00:13:20.665 "io_qpairs": 168, 
00:13:20.665 "current_admin_qpairs": 0, 00:13:20.665 "current_io_qpairs": 0, 00:13:20.665 "pending_bdev_io": 0, 00:13:20.665 "completed_nvme_io": 221, 00:13:20.665 "transports": [ 00:13:20.665 { 00:13:20.665 "trtype": "TCP" 00:13:20.665 } 00:13:20.665 ] 00:13:20.665 } 00:13:20.665 ] 00:13:20.665 }' 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:20.665 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:20.665 rmmod nvme_tcp 00:13:20.924 rmmod nvme_fabrics 00:13:20.924 rmmod nvme_keyring 00:13:20.924 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:20.924 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:20.924 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:20.924 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2942332 ']' 00:13:20.924 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2942332 00:13:20.924 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2942332 ']' 00:13:20.924 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2942332 00:13:20.924 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:20.924 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.924 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2942332 00:13:20.924 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.924 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.924 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2942332' 00:13:20.924 killing process with pid 2942332 00:13:20.924 15:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2942332 00:13:20.924 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2942332 00:13:21.183 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:21.183 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:21.183 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:21.183 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:21.183 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:21.183 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:21.184 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:21.184 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:21.184 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:21.184 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.184 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.184 15:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.088 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:23.088 00:13:23.088 real 0m33.386s 00:13:23.088 user 1m41.072s 00:13:23.088 sys 0m6.632s 00:13:23.088 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.088 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.088 ************************************ 00:13:23.088 END TEST 
nvmf_rpc 00:13:23.088 ************************************ 00:13:23.088 15:30:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:23.088 15:30:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:23.088 15:30:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.088 15:30:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:23.347 ************************************ 00:13:23.347 START TEST nvmf_invalid 00:13:23.347 ************************************ 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:23.347 * Looking for test storage... 00:13:23.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.347 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:23.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.348 --rc genhtml_branch_coverage=1 00:13:23.348 --rc genhtml_function_coverage=1 00:13:23.348 --rc genhtml_legend=1 00:13:23.348 --rc geninfo_all_blocks=1 00:13:23.348 --rc geninfo_unexecuted_blocks=1 00:13:23.348 00:13:23.348 ' 
00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:23.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.348 --rc genhtml_branch_coverage=1 00:13:23.348 --rc genhtml_function_coverage=1 00:13:23.348 --rc genhtml_legend=1 00:13:23.348 --rc geninfo_all_blocks=1 00:13:23.348 --rc geninfo_unexecuted_blocks=1 00:13:23.348 00:13:23.348 ' 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:23.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.348 --rc genhtml_branch_coverage=1 00:13:23.348 --rc genhtml_function_coverage=1 00:13:23.348 --rc genhtml_legend=1 00:13:23.348 --rc geninfo_all_blocks=1 00:13:23.348 --rc geninfo_unexecuted_blocks=1 00:13:23.348 00:13:23.348 ' 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:23.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.348 --rc genhtml_branch_coverage=1 00:13:23.348 --rc genhtml_function_coverage=1 00:13:23.348 --rc genhtml_legend=1 00:13:23.348 --rc geninfo_all_blocks=1 00:13:23.348 --rc geninfo_unexecuted_blocks=1 00:13:23.348 00:13:23.348 ' 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.348 15:30:29 
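The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.`/`-` into arrays (`ver1`, `ver2`) and compares them field by field up to the longer length. A standalone sketch of that comparison (simplified; the real `scripts/common.sh` helper also handles the `>`, `<=`, and `>=` operators seen in its `case "$op"` branch):

```shell
#!/usr/bin/env bash
# Minimal "less than" version compare: returns 0 if $1 < $2.
# Simplified sketch of the cmp_versions logic traced above, dots only.
lt() {
  local IFS=.
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}  # pad missing fields with 0
    (( a > b )) && return 1
    (( a < b )) && return 0
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why the lcov 1.x check above takes the `--rc lcov_branch_coverage=1` branch: `1 < 2` decides on the first field.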
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.348 
15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.348 15:30:29 
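The `PATH` echoed above has accumulated the same `/opt/go`, `/opt/golangci`, and `/opt/protoc` entries many times because `paths/export.sh` prepends them on every source. That is harmless for command lookup (first match wins), but if deduplication were wanted, a small filter like this would do it (illustrative helper, not part of SPDK):

```shell
#!/usr/bin/env bash
# Collapse duplicate PATH entries, keeping first-occurrence order.
dedup_path() {
  # Split on ':' records, print each entry only the first time it is seen,
  # then strip the trailing ':' that the output separator leaves behind.
  printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/local/bin"
# → /opt/go/bin:/usr/bin:/usr/local/bin
```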
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:23.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:23.348 15:30:29 
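The `[: : integer expression expected` message above is benign in this run: `common.sh` line 33 expands an empty variable into `'[' '' -eq 1 ']'`, the numeric test errors out, and the script simply takes the false branch. A defensive pattern for that class of check (illustrative only; `flag` is a stand-in for whatever variable ends up empty, not the actual SPDK variable name):

```shell
#!/usr/bin/env bash
# Guard a numeric test against empty/unset values by defaulting to 0,
# avoiding "[: : integer expression expected" from '[' '' -eq 1 ']'.
flag=""
if [ "${flag:-0}" -eq 1 ]; then
  echo "feature enabled"
else
  echo "feature disabled"
fi
# → feature disabled
```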
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:23.348 15:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:29.915 15:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.915 15:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:29.915 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:29.915 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:29.915 Found net devices under 0000:86:00.0: cvl_0_0 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:29.915 Found net devices under 0000:86:00.1: cvl_0_1 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.915 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.915 15:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:29.916 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.916 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.916 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:29.916 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:29.916 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.916 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.916 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:29.916 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:29.916 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.916 15:30:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.916 15:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:29.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:13:29.916 00:13:29.916 --- 10.0.0.2 ping statistics --- 00:13:29.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.916 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:13:29.916 00:13:29.916 --- 10.0.0.1 ping statistics --- 00:13:29.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.916 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:29.916 15:30:35 
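The `nvmf_tcp_init` sequence above moves the physical target NIC (`cvl_0_0`) into a namespace, addresses both sides, and verifies reachability with one ping in each direction. The same split can be reproduced without the E810 hardware using a veth pair; this is an illustrative sketch (interface and namespace names are made up, requires root), not the SPDK helper itself:

```shell
#!/usr/bin/env bash
# Target/initiator network split as in nvmf_tcp_init, but with a veth pair
# instead of physical NICs. Run as root.
set -e
ip netns add spdk_tgt_ns
ip link add veth_init type veth peer name veth_tgt
ip link set veth_tgt netns spdk_tgt_ns          # target side lives in the netns
ip addr add 10.0.0.1/24 dev veth_init           # initiator IP, host side
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_init up
ip netns exec spdk_tgt_ns ip link set veth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up
# Mirror the two ping checks in the log: host -> target, target -> host.
ping -c 1 10.0.0.2
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1
```

The target app is then launched under `ip netns exec`, which is why `NVMF_APP` gets `NVMF_TARGET_NS_CMD` prepended a few lines later.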
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2950445 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2950445 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2950445 ']' 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.916 [2024-12-06 15:30:35.316278] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:13:29.916 [2024-12-06 15:30:35.316321] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.916 [2024-12-06 15:30:35.389692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.916 [2024-12-06 15:30:35.430223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.916 [2024-12-06 15:30:35.430260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.916 [2024-12-06 15:30:35.430268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.916 [2024-12-06 15:30:35.430274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.916 [2024-12-06 15:30:35.430279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:29.916 [2024-12-06 15:30:35.431799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.916 [2024-12-06 15:30:35.431908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.916 [2024-12-06 15:30:35.432014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.916 [2024-12-06 15:30:35.432015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16803 00:13:29.916 [2024-12-06 15:30:35.754719] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:29.916 { 00:13:29.916 "nqn": "nqn.2016-06.io.spdk:cnode16803", 00:13:29.916 "tgt_name": "foobar", 00:13:29.916 "method": "nvmf_create_subsystem", 00:13:29.916 "req_id": 1 00:13:29.916 } 00:13:29.916 Got JSON-RPC error 
response 00:13:29.916 response: 00:13:29.916 { 00:13:29.916 "code": -32603, 00:13:29.916 "message": "Unable to find target foobar" 00:13:29.916 }' 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:29.916 { 00:13:29.916 "nqn": "nqn.2016-06.io.spdk:cnode16803", 00:13:29.916 "tgt_name": "foobar", 00:13:29.916 "method": "nvmf_create_subsystem", 00:13:29.916 "req_id": 1 00:13:29.916 } 00:13:29.916 Got JSON-RPC error response 00:13:29.916 response: 00:13:29.916 { 00:13:29.916 "code": -32603, 00:13:29.916 "message": "Unable to find target foobar" 00:13:29.916 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:29.916 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode825 00:13:30.174 [2024-12-06 15:30:35.967416] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode825: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:30.174 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:30.174 { 00:13:30.174 "nqn": "nqn.2016-06.io.spdk:cnode825", 00:13:30.174 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:30.174 "method": "nvmf_create_subsystem", 00:13:30.174 "req_id": 1 00:13:30.174 } 00:13:30.174 Got JSON-RPC error response 00:13:30.174 response: 00:13:30.174 { 00:13:30.174 "code": -32602, 00:13:30.174 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:30.174 }' 00:13:30.174 15:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:30.174 { 00:13:30.174 "nqn": "nqn.2016-06.io.spdk:cnode825", 00:13:30.174 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:30.174 "method": "nvmf_create_subsystem", 00:13:30.174 
"req_id": 1 00:13:30.174 } 00:13:30.174 Got JSON-RPC error response 00:13:30.174 response: 00:13:30.174 { 00:13:30.174 "code": -32602, 00:13:30.174 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:30.174 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:30.174 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:30.174 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20210 00:13:30.432 [2024-12-06 15:30:36.172071] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20210: invalid model number 'SPDK_Controller' 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:30.432 { 00:13:30.432 "nqn": "nqn.2016-06.io.spdk:cnode20210", 00:13:30.432 "model_number": "SPDK_Controller\u001f", 00:13:30.432 "method": "nvmf_create_subsystem", 00:13:30.432 "req_id": 1 00:13:30.432 } 00:13:30.432 Got JSON-RPC error response 00:13:30.432 response: 00:13:30.432 { 00:13:30.432 "code": -32602, 00:13:30.432 "message": "Invalid MN SPDK_Controller\u001f" 00:13:30.432 }' 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:30.432 { 00:13:30.432 "nqn": "nqn.2016-06.io.spdk:cnode20210", 00:13:30.432 "model_number": "SPDK_Controller\u001f", 00:13:30.432 "method": "nvmf_create_subsystem", 00:13:30.432 "req_id": 1 00:13:30.432 } 00:13:30.432 Got JSON-RPC error response 00:13:30.432 response: 00:13:30.432 { 00:13:30.432 "code": -32602, 00:13:30.432 "message": "Invalid MN SPDK_Controller\u001f" 00:13:30.432 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.432 15:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:30.432 15:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:30.432 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 
00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:30.433 
15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.433 
15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ''\''_FKuH}!E+Nk!~-;&{fBq' 00:13:30.433 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ''\''_FKuH}!E+Nk!~-;&{fBq' nqn.2016-06.io.spdk:cnode10229 00:13:30.692 [2024-12-06 15:30:36.509212] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10229: invalid serial number ''_FKuH}!E+Nk!~-;&{fBq' 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:30.692 { 00:13:30.692 "nqn": "nqn.2016-06.io.spdk:cnode10229", 00:13:30.692 "serial_number": "'\''_FKuH}!E+Nk!~-;&{fBq", 00:13:30.692 "method": "nvmf_create_subsystem", 00:13:30.692 "req_id": 1 00:13:30.692 
} 00:13:30.692 Got JSON-RPC error response 00:13:30.692 response: 00:13:30.692 { 00:13:30.692 "code": -32602, 00:13:30.692 "message": "Invalid SN '\''_FKuH}!E+Nk!~-;&{fBq" 00:13:30.692 }' 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:30.692 { 00:13:30.692 "nqn": "nqn.2016-06.io.spdk:cnode10229", 00:13:30.692 "serial_number": "'_FKuH}!E+Nk!~-;&{fBq", 00:13:30.692 "method": "nvmf_create_subsystem", 00:13:30.692 "req_id": 1 00:13:30.692 } 00:13:30.692 Got JSON-RPC error response 00:13:30.692 response: 00:13:30.692 { 00:13:30.692 "code": -32602, 00:13:30.692 "message": "Invalid SN '_FKuH}!E+Nk!~-;&{fBq" 00:13:30.692 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.692 15:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.692 15:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.692 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:30.693 15:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:30.693 15:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:30.693 15:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.693 15:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.693 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.952 15:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:30.952 15:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:30.952 15:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:30.952 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 
00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]] 
00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '=F &whL;kz%M\uT9X/KEwYt4x`z~ddsBL?K~8{(8}' 00:13:30.953 15:30:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '=F &whL;kz%M\uT9X/KEwYt4x`z~ddsBL?K~8{(8}' nqn.2016-06.io.spdk:cnode13560 00:13:31.212 [2024-12-06 15:30:36.990783] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13560: invalid model number '=F &whL;kz%M\uT9X/KEwYt4x`z~ddsBL?K~8{(8}' 00:13:31.212 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:31.212 { 00:13:31.212 "nqn": "nqn.2016-06.io.spdk:cnode13560", 00:13:31.212 "model_number": "=F &whL;kz%M\\uT9X/KEwYt4x`z~ddsBL?K~8{(8}", 00:13:31.212 "method": "nvmf_create_subsystem", 00:13:31.212 "req_id": 1 00:13:31.212 } 00:13:31.212 Got JSON-RPC error response 00:13:31.212 response: 00:13:31.212 { 00:13:31.212 "code": -32602, 00:13:31.212 "message": "Invalid MN =F &whL;kz%M\\uT9X/KEwYt4x`z~ddsBL?K~8{(8}" 00:13:31.212 }' 00:13:31.212 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:31.212 { 00:13:31.212 "nqn": "nqn.2016-06.io.spdk:cnode13560", 00:13:31.212 "model_number": "=F &whL;kz%M\\uT9X/KEwYt4x`z~ddsBL?K~8{(8}", 00:13:31.212 "method": "nvmf_create_subsystem", 00:13:31.212 "req_id": 1 00:13:31.212 } 00:13:31.212 Got JSON-RPC error response 00:13:31.212 response: 00:13:31.212 { 00:13:31.212 "code": -32602, 00:13:31.212 "message": "Invalid MN =F &whL;kz%M\\uT9X/KEwYt4x`z~ddsBL?K~8{(8}" 00:13:31.212 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:31.212 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:31.212 [2024-12-06 15:30:37.195539] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:13:31.470 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:31.471 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:31.471 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:31.471 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:31.471 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:31.471 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:31.729 [2024-12-06 15:30:37.602088] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:31.729 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:31.729 { 00:13:31.729 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:31.729 "listen_address": { 00:13:31.729 "trtype": "tcp", 00:13:31.729 "traddr": "", 00:13:31.729 "trsvcid": "4421" 00:13:31.729 }, 00:13:31.729 "method": "nvmf_subsystem_remove_listener", 00:13:31.729 "req_id": 1 00:13:31.729 } 00:13:31.729 Got JSON-RPC error response 00:13:31.729 response: 00:13:31.729 { 00:13:31.729 "code": -32602, 00:13:31.729 "message": "Invalid parameters" 00:13:31.729 }' 00:13:31.729 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:31.729 { 00:13:31.729 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:31.729 "listen_address": { 00:13:31.729 "trtype": "tcp", 00:13:31.729 "traddr": "", 00:13:31.729 "trsvcid": "4421" 00:13:31.729 }, 00:13:31.729 "method": "nvmf_subsystem_remove_listener", 00:13:31.729 "req_id": 1 00:13:31.729 } 00:13:31.729 Got JSON-RPC error response 
00:13:31.729 response: 00:13:31.729 { 00:13:31.729 "code": -32602, 00:13:31.729 "message": "Invalid parameters" 00:13:31.729 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:31.729 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18285 -i 0 00:13:31.986 [2024-12-06 15:30:37.802721] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18285: invalid cntlid range [0-65519] 00:13:31.986 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:31.986 { 00:13:31.986 "nqn": "nqn.2016-06.io.spdk:cnode18285", 00:13:31.986 "min_cntlid": 0, 00:13:31.986 "method": "nvmf_create_subsystem", 00:13:31.986 "req_id": 1 00:13:31.986 } 00:13:31.986 Got JSON-RPC error response 00:13:31.986 response: 00:13:31.986 { 00:13:31.986 "code": -32602, 00:13:31.987 "message": "Invalid cntlid range [0-65519]" 00:13:31.987 }' 00:13:31.987 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:31.987 { 00:13:31.987 "nqn": "nqn.2016-06.io.spdk:cnode18285", 00:13:31.987 "min_cntlid": 0, 00:13:31.987 "method": "nvmf_create_subsystem", 00:13:31.987 "req_id": 1 00:13:31.987 } 00:13:31.987 Got JSON-RPC error response 00:13:31.987 response: 00:13:31.987 { 00:13:31.987 "code": -32602, 00:13:31.987 "message": "Invalid cntlid range [0-65519]" 00:13:31.987 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:31.987 15:30:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26933 -i 65520 00:13:32.243 [2024-12-06 15:30:37.999405] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26933: invalid cntlid range [65520-65519] 00:13:32.243 15:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:32.243 { 00:13:32.243 "nqn": "nqn.2016-06.io.spdk:cnode26933", 00:13:32.243 "min_cntlid": 65520, 00:13:32.243 "method": "nvmf_create_subsystem", 00:13:32.243 "req_id": 1 00:13:32.243 } 00:13:32.243 Got JSON-RPC error response 00:13:32.243 response: 00:13:32.243 { 00:13:32.243 "code": -32602, 00:13:32.243 "message": "Invalid cntlid range [65520-65519]" 00:13:32.243 }' 00:13:32.243 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:32.243 { 00:13:32.243 "nqn": "nqn.2016-06.io.spdk:cnode26933", 00:13:32.243 "min_cntlid": 65520, 00:13:32.243 "method": "nvmf_create_subsystem", 00:13:32.244 "req_id": 1 00:13:32.244 } 00:13:32.244 Got JSON-RPC error response 00:13:32.244 response: 00:13:32.244 { 00:13:32.244 "code": -32602, 00:13:32.244 "message": "Invalid cntlid range [65520-65519]" 00:13:32.244 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.244 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17558 -I 0 00:13:32.244 [2024-12-06 15:30:38.212105] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17558: invalid cntlid range [1-0] 00:13:32.500 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:32.500 { 00:13:32.500 "nqn": "nqn.2016-06.io.spdk:cnode17558", 00:13:32.500 "max_cntlid": 0, 00:13:32.500 "method": "nvmf_create_subsystem", 00:13:32.500 "req_id": 1 00:13:32.500 } 00:13:32.500 Got JSON-RPC error response 00:13:32.500 response: 00:13:32.500 { 00:13:32.500 "code": -32602, 00:13:32.500 "message": "Invalid cntlid range [1-0]" 00:13:32.500 }' 00:13:32.500 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:32.500 { 00:13:32.500 "nqn": 
"nqn.2016-06.io.spdk:cnode17558", 00:13:32.500 "max_cntlid": 0, 00:13:32.500 "method": "nvmf_create_subsystem", 00:13:32.500 "req_id": 1 00:13:32.500 } 00:13:32.500 Got JSON-RPC error response 00:13:32.500 response: 00:13:32.500 { 00:13:32.500 "code": -32602, 00:13:32.500 "message": "Invalid cntlid range [1-0]" 00:13:32.500 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.500 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20475 -I 65520 00:13:32.500 [2024-12-06 15:30:38.440873] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20475: invalid cntlid range [1-65520] 00:13:32.500 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:32.500 { 00:13:32.500 "nqn": "nqn.2016-06.io.spdk:cnode20475", 00:13:32.500 "max_cntlid": 65520, 00:13:32.500 "method": "nvmf_create_subsystem", 00:13:32.500 "req_id": 1 00:13:32.500 } 00:13:32.500 Got JSON-RPC error response 00:13:32.500 response: 00:13:32.500 { 00:13:32.500 "code": -32602, 00:13:32.500 "message": "Invalid cntlid range [1-65520]" 00:13:32.500 }' 00:13:32.500 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:32.500 { 00:13:32.500 "nqn": "nqn.2016-06.io.spdk:cnode20475", 00:13:32.500 "max_cntlid": 65520, 00:13:32.500 "method": "nvmf_create_subsystem", 00:13:32.500 "req_id": 1 00:13:32.500 } 00:13:32.500 Got JSON-RPC error response 00:13:32.500 response: 00:13:32.500 { 00:13:32.500 "code": -32602, 00:13:32.500 "message": "Invalid cntlid range [1-65520]" 00:13:32.500 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.500 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10167 -i 6 -I 5 00:13:32.756 
[2024-12-06 15:30:38.637550] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10167: invalid cntlid range [6-5] 00:13:32.756 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:32.756 { 00:13:32.756 "nqn": "nqn.2016-06.io.spdk:cnode10167", 00:13:32.756 "min_cntlid": 6, 00:13:32.756 "max_cntlid": 5, 00:13:32.756 "method": "nvmf_create_subsystem", 00:13:32.756 "req_id": 1 00:13:32.756 } 00:13:32.756 Got JSON-RPC error response 00:13:32.756 response: 00:13:32.756 { 00:13:32.756 "code": -32602, 00:13:32.756 "message": "Invalid cntlid range [6-5]" 00:13:32.756 }' 00:13:32.756 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:32.756 { 00:13:32.756 "nqn": "nqn.2016-06.io.spdk:cnode10167", 00:13:32.756 "min_cntlid": 6, 00:13:32.756 "max_cntlid": 5, 00:13:32.756 "method": "nvmf_create_subsystem", 00:13:32.756 "req_id": 1 00:13:32.756 } 00:13:32.756 Got JSON-RPC error response 00:13:32.756 response: 00:13:32.756 { 00:13:32.756 "code": -32602, 00:13:32.756 "message": "Invalid cntlid range [6-5]" 00:13:32.756 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.756 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:33.012 { 00:13:33.012 "name": "foobar", 00:13:33.012 "method": "nvmf_delete_target", 00:13:33.012 "req_id": 1 00:13:33.012 } 00:13:33.012 Got JSON-RPC error response 00:13:33.012 response: 00:13:33.012 { 00:13:33.012 "code": -32602, 00:13:33.012 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:13:33.012 }' 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:33.012 { 00:13:33.012 "name": "foobar", 00:13:33.012 "method": "nvmf_delete_target", 00:13:33.012 "req_id": 1 00:13:33.012 } 00:13:33.012 Got JSON-RPC error response 00:13:33.012 response: 00:13:33.012 { 00:13:33.012 "code": -32602, 00:13:33.012 "message": "The specified target doesn't exist, cannot delete it." 00:13:33.012 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:33.012 rmmod nvme_tcp 00:13:33.012 rmmod nvme_fabrics 00:13:33.012 rmmod nvme_keyring 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2950445 ']' 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@518 -- # killprocess 2950445 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2950445 ']' 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2950445 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2950445 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2950445' 00:13:33.012 killing process with pid 2950445 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2950445 00:13:33.012 15:30:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2950445 00:13:33.271 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:33.271 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:33.271 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:33.271 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:33.271 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:33.271 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:33.271 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@791 -- # iptables-restore 00:13:33.271 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:33.271 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:33.271 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.271 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.271 15:30:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.167 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:35.167 00:13:35.167 real 0m12.030s 00:13:35.167 user 0m18.639s 00:13:35.167 sys 0m5.452s 00:13:35.167 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.167 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:35.167 ************************************ 00:13:35.167 END TEST nvmf_invalid 00:13:35.167 ************************************ 00:13:35.167 15:30:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:35.167 15:30:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:35.167 15:30:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.167 15:30:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.453 ************************************ 00:13:35.453 START TEST nvmf_connect_stress 00:13:35.453 ************************************ 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:35.453 * Looking for test storage... 00:13:35.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:35.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.453 --rc genhtml_branch_coverage=1 00:13:35.453 --rc genhtml_function_coverage=1 00:13:35.453 --rc genhtml_legend=1 00:13:35.453 --rc geninfo_all_blocks=1 00:13:35.453 --rc geninfo_unexecuted_blocks=1 00:13:35.453 00:13:35.453 ' 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:35.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.453 --rc genhtml_branch_coverage=1 00:13:35.453 --rc genhtml_function_coverage=1 00:13:35.453 --rc genhtml_legend=1 00:13:35.453 --rc geninfo_all_blocks=1 00:13:35.453 --rc geninfo_unexecuted_blocks=1 00:13:35.453 00:13:35.453 ' 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:35.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.453 --rc genhtml_branch_coverage=1 00:13:35.453 --rc genhtml_function_coverage=1 00:13:35.453 --rc genhtml_legend=1 00:13:35.453 --rc geninfo_all_blocks=1 00:13:35.453 --rc geninfo_unexecuted_blocks=1 00:13:35.453 00:13:35.453 ' 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:35.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.453 --rc genhtml_branch_coverage=1 00:13:35.453 --rc genhtml_function_coverage=1 00:13:35.453 --rc genhtml_legend=1 00:13:35.453 --rc geninfo_all_blocks=1 00:13:35.453 --rc geninfo_unexecuted_blocks=1 00:13:35.453 00:13:35.453 ' 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.453 15:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.453 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:35.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:35.454 15:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.017 15:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:42.017 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.017 15:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:42.017 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.017 15:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:42.017 Found net devices under 0000:86:00.0: cvl_0_0 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:42.017 Found net devices under 0000:86:00.1: cvl_0_1 
00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.017 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:42.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:42.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:13:42.018 00:13:42.018 --- 10.0.0.2 ping statistics --- 00:13:42.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.018 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:42.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:13:42.018 00:13:42.018 --- 10.0.0.1 ping statistics --- 00:13:42.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.018 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:42.018 15:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2954827 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2954827 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2954827 ']' 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.018 [2024-12-06 15:30:47.394539] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:13:42.018 [2024-12-06 15:30:47.394585] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.018 [2024-12-06 15:30:47.471334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:42.018 [2024-12-06 15:30:47.510109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.018 [2024-12-06 15:30:47.510146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.018 [2024-12-06 15:30:47.510152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.018 [2024-12-06 15:30:47.510158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.018 [2024-12-06 15:30:47.510163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:42.018 [2024-12-06 15:30:47.511497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.018 [2024-12-06 15:30:47.511582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.018 [2024-12-06 15:30:47.511583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.018 [2024-12-06 15:30:47.660699] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.018 [2024-12-06 15:30:47.676920] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.018 NULL1 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2954848 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:42.018 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:42.019 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954848 00:13:42.019 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.019 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.019 15:30:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.277 15:30:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.028 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954848 00:13:52.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2954848) - No such process 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2954848 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:52.028 rmmod nvme_tcp 00:13:52.028 rmmod nvme_fabrics 00:13:52.028 rmmod nvme_keyring 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2954827 ']' 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2954827 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2954827 ']' 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2954827 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2954827 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2954827' 00:13:52.028 killing process with pid 2954827 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2954827 00:13:52.028 15:30:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2954827 00:13:52.287 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:52.287 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:52.287 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:52.287 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:52.287 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:52.287 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:52.287 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:52.287 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.287 15:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:52.287 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.287 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.287 15:30:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:54.823 00:13:54.823 real 0m19.042s 00:13:54.823 user 0m39.357s 00:13:54.823 sys 0m8.521s 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.823 ************************************ 00:13:54.823 END TEST nvmf_connect_stress 00:13:54.823 ************************************ 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:54.823 ************************************ 00:13:54.823 START TEST nvmf_fused_ordering 00:13:54.823 ************************************ 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:54.823 * Looking for test storage... 
00:13:54.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:54.823 15:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.823 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.824 15:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:54.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.824 --rc genhtml_branch_coverage=1 00:13:54.824 --rc genhtml_function_coverage=1 00:13:54.824 --rc genhtml_legend=1 00:13:54.824 --rc geninfo_all_blocks=1 00:13:54.824 --rc geninfo_unexecuted_blocks=1 00:13:54.824 00:13:54.824 ' 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:54.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.824 --rc genhtml_branch_coverage=1 00:13:54.824 --rc genhtml_function_coverage=1 00:13:54.824 --rc genhtml_legend=1 00:13:54.824 --rc geninfo_all_blocks=1 00:13:54.824 --rc geninfo_unexecuted_blocks=1 00:13:54.824 00:13:54.824 ' 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:54.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.824 --rc genhtml_branch_coverage=1 00:13:54.824 --rc genhtml_function_coverage=1 00:13:54.824 --rc genhtml_legend=1 00:13:54.824 --rc geninfo_all_blocks=1 00:13:54.824 --rc geninfo_unexecuted_blocks=1 00:13:54.824 00:13:54.824 ' 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:54.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.824 --rc genhtml_branch_coverage=1 00:13:54.824 --rc genhtml_function_coverage=1 00:13:54.824 --rc genhtml_legend=1 00:13:54.824 --rc geninfo_all_blocks=1 00:13:54.824 --rc geninfo_unexecuted_blocks=1 00:13:54.824 00:13:54.824 ' 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:54.824 15:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:01.398 15:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:01.398 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:01.398 15:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:01.398 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:01.398 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.399 15:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:01.399 Found net devices under 0000:86:00.0: cvl_0_0 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:01.399 Found net devices under 0000:86:00.1: cvl_0_1 
00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:01.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:01.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:14:01.399 00:14:01.399 --- 10.0.0.2 ping statistics --- 00:14:01.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.399 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:01.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:01.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:14:01.399 00:14:01.399 --- 10.0.0.1 ping statistics --- 00:14:01.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.399 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:01.399 15:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2960166 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2960166 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2960166 ']' 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.399 [2024-12-06 15:31:06.574494] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:14:01.399 [2024-12-06 15:31:06.574551] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.399 [2024-12-06 15:31:06.652025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.399 [2024-12-06 15:31:06.693220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.399 [2024-12-06 15:31:06.693254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.399 [2024-12-06 15:31:06.693261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.399 [2024-12-06 15:31:06.693267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.399 [2024-12-06 15:31:06.693272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:01.399 [2024-12-06 15:31:06.693821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.399 [2024-12-06 15:31:06.830030] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.399 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.400 [2024-12-06 15:31:06.850223] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.400 NULL1 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.400 15:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:01.400 [2024-12-06 15:31:06.908986] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:14:01.400 [2024-12-06 15:31:06.909015] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960248 ] 00:14:01.400 Attached to nqn.2016-06.io.spdk:cnode1 00:14:01.400 Namespace ID: 1 size: 1GB 00:14:01.400 fused_ordering(0) 00:14:01.400 fused_ordering(1) 00:14:01.400 fused_ordering(2) 00:14:01.400 fused_ordering(3) 00:14:01.400 fused_ordering(4) 00:14:01.400 fused_ordering(5) 00:14:01.400 fused_ordering(6) 00:14:01.400 fused_ordering(7) 00:14:01.400 fused_ordering(8) 00:14:01.400 fused_ordering(9) 00:14:01.400 fused_ordering(10) 00:14:01.400 fused_ordering(11) 00:14:01.400 fused_ordering(12) 00:14:01.400 fused_ordering(13) 00:14:01.400 fused_ordering(14) 00:14:01.400 fused_ordering(15) 00:14:01.400 fused_ordering(16) 00:14:01.400 fused_ordering(17) 00:14:01.400 fused_ordering(18) 00:14:01.400 fused_ordering(19) 00:14:01.400 fused_ordering(20) 00:14:01.400 fused_ordering(21) 00:14:01.400 fused_ordering(22) 00:14:01.400 fused_ordering(23) 00:14:01.400 fused_ordering(24) 00:14:01.400 fused_ordering(25) 00:14:01.400 fused_ordering(26) 00:14:01.400 fused_ordering(27) 00:14:01.400 
00:14:01.400 fused_ordering(28) ... 00:14:02.749 fused_ordering(1023) [996 repetitive fused_ordering progress entries, iterations 28-1023, elided]
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:02.749 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2960166 ']'
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2960166
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2960166 ']'
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2960166
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:02.749 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2960166
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2960166'
killing process with pid 2960166
15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2960166
15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2960166
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:03.008 15:31:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:05.545 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:05.546
00:14:05.546 real 0m10.679s
00:14:05.546 user 0m4.874s
00:14:05.546 sys 0m5.901s
00:14:05.546 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:05.546 15:31:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:05.546 ************************************
00:14:05.546 END TEST nvmf_fused_ordering
00:14:05.546 ************************************
00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:14:05.546 15:31:11
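The teardown traced above bundles three recurring shell patterns: the `for i in {1..20}` retry around `modprobe -v -r` (the module only unloads once the last NVMe/TCP connection drains), the `killprocess` pid/sudo guard, and the `iptr` pipeline `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A minimal sketch of each, assuming bash; the function names here (`retry`, `killprocess_sketch`, `spdk_iptr_filter`) are illustrative, not the actual SPDK code.

```shell
# Generic form of the {1..20} unload retry loop (nvmf/common.sh@125-127).
retry() {
    local i
    for i in {1..20}; do
        "$@" && return 0    # stop at the first success
    done
    return 1                # all 20 attempts failed
}
# In the log this wraps the module unload, e.g.: retry modprobe -v -r nvme-tcp

# Sketch of killprocess (common/autotest_common.sh) as traced: check the pid,
# refuse to kill a bare "sudo" wrapper, then kill and reap.
killprocess_sketch() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                  # no pid recorded
    kill -0 "$pid" 2> /dev/null || return 0    # process already gone
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1     # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null || true           # reap; ignore its exit status
}

# The filter stage of iptr (nvmf/common.sh@791): iptables-save is piped
# through this and back into iptables-restore, dropping only the
# SPDK-tagged firewall rules.
spdk_iptr_filter() {
    grep -v SPDK_NVMF
}
```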
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:05.546 ************************************ 00:14:05.546 START TEST nvmf_ns_masking 00:14:05.546 ************************************ 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:05.546 * Looking for test storage... 00:14:05.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.546 15:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:05.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.546 --rc genhtml_branch_coverage=1 00:14:05.546 --rc genhtml_function_coverage=1 00:14:05.546 --rc genhtml_legend=1 00:14:05.546 --rc geninfo_all_blocks=1 00:14:05.546 --rc geninfo_unexecuted_blocks=1 00:14:05.546 00:14:05.546 ' 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:05.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.546 --rc genhtml_branch_coverage=1 00:14:05.546 --rc genhtml_function_coverage=1 00:14:05.546 --rc genhtml_legend=1 00:14:05.546 --rc geninfo_all_blocks=1 00:14:05.546 --rc geninfo_unexecuted_blocks=1 00:14:05.546 00:14:05.546 ' 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:05.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.546 --rc genhtml_branch_coverage=1 00:14:05.546 --rc genhtml_function_coverage=1 00:14:05.546 --rc genhtml_legend=1 00:14:05.546 --rc geninfo_all_blocks=1 00:14:05.546 --rc geninfo_unexecuted_blocks=1 00:14:05.546 00:14:05.546 ' 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:05.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.546 --rc genhtml_branch_coverage=1 00:14:05.546 --rc 
genhtml_function_coverage=1 00:14:05.546 --rc genhtml_legend=1 00:14:05.546 --rc geninfo_all_blocks=1 00:14:05.546 --rc geninfo_unexecuted_blocks=1 00:14:05.546 00:14:05.546 ' 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.546 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:05.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=30659d48-815d-468a-9f22-e1d23c55eb5c 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=64683b01-30fa-4d0a-872b-63a3dd989d88 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=be5f4ef9-c372-47dd-a2c7-09219a8f2f25 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:05.547 15:31:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:12.116 15:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.116 15:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:12.116 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:12.116 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:12.116 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:14:12.117 Found net devices under 0000:86:00.0: cvl_0_0 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:12.117 Found net devices under 0000:86:00.1: cvl_0_1 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.117 15:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:12.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:14:12.117 00:14:12.117 --- 10.0.0.2 ping statistics --- 00:14:12.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.117 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:12.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:14:12.117 00:14:12.117 --- 10.0.0.1 ping statistics --- 00:14:12.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.117 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2964017 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2964017 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2964017 ']' 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.117 15:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:12.117 [2024-12-06 15:31:17.305113] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:14:12.117 [2024-12-06 15:31:17.305168] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.117 [2024-12-06 15:31:17.383476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.117 [2024-12-06 15:31:17.421459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.117 [2024-12-06 15:31:17.421491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:12.117 [2024-12-06 15:31:17.421500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.117 [2024-12-06 15:31:17.421507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.117 [2024-12-06 15:31:17.421516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.117 [2024-12-06 15:31:17.422107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.376 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:12.376 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:12.376 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:12.376 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:12.376 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:12.376 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.376 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:12.376 [2024-12-06 15:31:18.335661] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.376 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:12.376 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:12.376 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:14:12.635 Malloc1 00:14:12.635 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:12.894 Malloc2 00:14:12.894 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:13.159 15:31:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:13.422 15:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.422 [2024-12-06 15:31:19.370055] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.422 15:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:13.423 15:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I be5f4ef9-c372-47dd-a2c7-09219a8f2f25 -a 10.0.0.2 -s 4420 -i 4 00:14:13.681 15:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:13.681 15:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:13.681 15:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:13.681 15:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:13.681 15:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:16.211 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:16.211 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:16.211 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.211 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:16.211 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.211 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:16.211 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:16.211 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:16.211 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:16.211 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.212 [ 0]:0x1 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.212 
15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc04f012e6bf4641b797c125cce48925 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc04f012e6bf4641b797c125cce48925 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:16.212 [ 0]:0x1 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc04f012e6bf4641b797c125cce48925 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc04f012e6bf4641b797c125cce48925 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.212 15:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:16.212 [ 1]:0x2 00:14:16.212 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:16.212 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.212 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17019b95b7b24002a7e40d7526382889 00:14:16.212 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17019b95b7b24002a7e40d7526382889 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.212 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:16.212 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:16.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.212 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.471 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:16.731 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:16.731 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I be5f4ef9-c372-47dd-a2c7-09219a8f2f25 -a 10.0.0.2 -s 4420 -i 4 00:14:16.731 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:16.731 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:16.731 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.731 15:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:16.731 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:16.731 15:31:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:19.267 [ 0]:0x2 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17019b95b7b24002a7e40d7526382889 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17019b95b7b24002a7e40d7526382889 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.267 15:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:19.267 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:19.268 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.268 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:19.268 [ 0]:0x1 00:14:19.268 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.268 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.268 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc04f012e6bf4641b797c125cce48925 00:14:19.268 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc04f012e6bf4641b797c125cce48925 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.268 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:19.268 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.268 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:19.268 [ 1]:0x2 00:14:19.268 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.268 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17019b95b7b24002a7e40d7526382889 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17019b95b7b24002a7e40d7526382889 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.527 [ 0]:0x2 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:19.527 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.785 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17019b95b7b24002a7e40d7526382889 00:14:19.785 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17019b95b7b24002a7e40d7526382889 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.785 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:19.785 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.785 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:20.046 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:20.046 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I be5f4ef9-c372-47dd-a2c7-09219a8f2f25 -a 10.0.0.2 -s 4420 -i 4 00:14:20.046 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:20.046 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:20.046 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.046 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:20.046 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:20.046 15:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:21.950 15:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:21.950 15:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:21.950 15:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.950 15:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:21.950 15:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.950 15:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:21.950 15:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:21.950 15:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:22.209 15:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:22.209 15:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:22.209 15:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:22.209 15:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.209 15:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.209 [ 0]:0x1 00:14:22.209 15:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.209 15:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.209 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc04f012e6bf4641b797c125cce48925 00:14:22.209 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc04f012e6bf4641b797c125cce48925 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.209 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:22.209 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.209 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.209 [ 1]:0x2 00:14:22.209 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.209 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.209 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17019b95b7b24002a7e40d7526382889 00:14:22.209 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17019b95b7b24002a7e40d7526382889 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.209 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:22.469 
15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.469 [ 0]:0x2 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17019b95b7b24002a7e40d7526382889 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17019b95b7b24002a7e40d7526382889 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.469 15:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:22.469 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:22.729 [2024-12-06 15:31:28.535794] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:22.729 request: 00:14:22.729 { 00:14:22.729 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.729 "nsid": 2, 00:14:22.729 "host": "nqn.2016-06.io.spdk:host1", 00:14:22.729 "method": "nvmf_ns_remove_host", 00:14:22.729 "req_id": 1 00:14:22.729 } 00:14:22.729 Got JSON-RPC error response 00:14:22.729 response: 00:14:22.729 { 00:14:22.729 "code": -32602, 00:14:22.729 "message": "Invalid parameters" 00:14:22.729 } 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:22.729 15:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.729 [ 0]:0x2 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=17019b95b7b24002a7e40d7526382889 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 17019b95b7b24002a7e40d7526382889 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2966016 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2966016 
/var/tmp/host.sock 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2966016 ']' 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.729 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:22.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:22.730 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.730 15:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:22.989 [2024-12-06 15:31:28.759911] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:14:22.989 [2024-12-06 15:31:28.759959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2966016 ] 00:14:22.989 [2024-12-06 15:31:28.833107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.989 [2024-12-06 15:31:28.873656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.248 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.248 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:23.248 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.506 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:23.506 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 30659d48-815d-468a-9f22-e1d23c55eb5c 00:14:23.506 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:23.506 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 30659D48815D468A9F22E1D23C55EB5C -i 00:14:23.764 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 64683b01-30fa-4d0a-872b-63a3dd989d88 00:14:23.764 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:23.764 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 64683B0130FA4D0A872B63A3DD989D88 -i 00:14:24.022 15:31:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:24.281 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:24.540 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:24.540 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:24.540 nvme0n1 00:14:24.800 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:24.800 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:25.059 nvme1n2 00:14:25.059 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:25.059 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:25.059 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:25.059 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:25.059 15:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:25.318 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:25.318 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:25.318 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:25.318 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:25.577 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 30659d48-815d-468a-9f22-e1d23c55eb5c == \3\0\6\5\9\d\4\8\-\8\1\5\d\-\4\6\8\a\-\9\f\2\2\-\e\1\d\2\3\c\5\5\e\b\5\c ]] 00:14:25.577 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:25.577 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:25.577 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:25.577 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 64683b01-30fa-4d0a-872b-63a3dd989d88 == \6\4\6\8\3\b\0\1\-\3\0\f\a\-\4\d\0\a\-\8\7\2\b\-\6\3\a\3\d\d\9\8\9\d\8\8 ]] 00:14:25.577 15:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.837 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:26.096 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 30659d48-815d-468a-9f22-e1d23c55eb5c 00:14:26.096 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:26.096 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 30659D48815D468A9F22E1D23C55EB5C 00:14:26.096 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:26.096 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 30659D48815D468A9F22E1D23C55EB5C 00:14:26.096 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.096 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.096 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.096 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.096 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.096 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.096 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.096 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:26.096 15:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 30659D48815D468A9F22E1D23C55EB5C 00:14:26.355 [2024-12-06 15:31:32.101737] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:26.355 [2024-12-06 15:31:32.101769] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:26.355 [2024-12-06 15:31:32.101780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:26.355 request: 00:14:26.355 { 00:14:26.355 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.355 "namespace": { 00:14:26.355 "bdev_name": "invalid", 00:14:26.355 "nsid": 1, 00:14:26.355 "nguid": "30659D48815D468A9F22E1D23C55EB5C", 00:14:26.355 "no_auto_visible": false, 00:14:26.355 "hide_metadata": false 00:14:26.355 }, 00:14:26.355 "method": "nvmf_subsystem_add_ns", 00:14:26.355 "req_id": 1 00:14:26.355 } 00:14:26.355 Got JSON-RPC error response 00:14:26.355 response: 00:14:26.355 { 00:14:26.355 "code": -32602, 00:14:26.355 "message": "Invalid parameters" 00:14:26.355 } 00:14:26.355 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:26.355 15:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:26.355 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:26.355 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:26.355 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 30659d48-815d-468a-9f22-e1d23c55eb5c 00:14:26.355 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:26.355 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 30659D48815D468A9F22E1D23C55EB5C -i 00:14:26.355 15:31:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2966016 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2966016 ']' 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2966016 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:28.885 15:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2966016 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2966016' 00:14:28.885 killing process with pid 2966016 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2966016 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2966016 00:14:28.885 15:31:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:14:29.154 rmmod nvme_tcp 00:14:29.154 rmmod nvme_fabrics 00:14:29.154 rmmod nvme_keyring 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2964017 ']' 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2964017 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2964017 ']' 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2964017 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.154 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2964017 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2964017' 00:14:29.413 killing process with pid 2964017 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2964017 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2964017 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.413 15:31:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:31.947 00:14:31.947 real 0m26.383s 00:14:31.947 user 0m31.568s 00:14:31.947 sys 0m7.002s 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:31.947 ************************************ 00:14:31.947 END TEST nvmf_ns_masking 00:14:31.947 ************************************ 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:31.947 ************************************ 00:14:31.947 START TEST nvmf_nvme_cli 00:14:31.947 ************************************ 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:31.947 * Looking for test storage... 00:14:31.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:31.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.947 --rc genhtml_branch_coverage=1 00:14:31.947 --rc genhtml_function_coverage=1 00:14:31.947 --rc genhtml_legend=1 00:14:31.947 --rc geninfo_all_blocks=1 00:14:31.947 --rc geninfo_unexecuted_blocks=1 00:14:31.947 
00:14:31.947 ' 00:14:31.947 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:31.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.947 --rc genhtml_branch_coverage=1 00:14:31.947 --rc genhtml_function_coverage=1 00:14:31.947 --rc genhtml_legend=1 00:14:31.947 --rc geninfo_all_blocks=1 00:14:31.947 --rc geninfo_unexecuted_blocks=1 00:14:31.948 00:14:31.948 ' 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:31.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.948 --rc genhtml_branch_coverage=1 00:14:31.948 --rc genhtml_function_coverage=1 00:14:31.948 --rc genhtml_legend=1 00:14:31.948 --rc geninfo_all_blocks=1 00:14:31.948 --rc geninfo_unexecuted_blocks=1 00:14:31.948 00:14:31.948 ' 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:31.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.948 --rc genhtml_branch_coverage=1 00:14:31.948 --rc genhtml_function_coverage=1 00:14:31.948 --rc genhtml_legend=1 00:14:31.948 --rc geninfo_all_blocks=1 00:14:31.948 --rc geninfo_unexecuted_blocks=1 00:14:31.948 00:14:31.948 ' 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.948 15:31:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:31.948 15:31:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:38.520 15:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:38.520 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:38.520 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.520 15:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:38.520 Found net devices under 0000:86:00.0: cvl_0_0 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.520 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:38.521 Found net devices under 0000:86:00.1: cvl_0_1 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.521 15:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:38.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:14:38.521 00:14:38.521 --- 10.0.0.2 ping statistics --- 00:14:38.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.521 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:14:38.521 00:14:38.521 --- 10.0.0.1 ping statistics --- 00:14:38.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.521 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:38.521 15:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2970734 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2970734 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2970734 ']' 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.521 15:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.521 [2024-12-06 15:31:43.780286] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:14:38.521 [2024-12-06 15:31:43.780333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.521 [2024-12-06 15:31:43.863196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.521 [2024-12-06 15:31:43.906201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.521 [2024-12-06 15:31:43.906238] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.521 [2024-12-06 15:31:43.906248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.521 [2024-12-06 15:31:43.906256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.521 [2024-12-06 15:31:43.906262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:38.521 [2024-12-06 15:31:43.907744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.521 [2024-12-06 15:31:43.907850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.521 [2024-12-06 15:31:43.907866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.522 [2024-12-06 15:31:43.907869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.781 [2024-12-06 15:31:44.671049] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.781 Malloc0 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.781 Malloc1 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.781 [2024-12-06 15:31:44.772134] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.781 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:39.040 00:14:39.040 Discovery Log Number of Records 2, Generation counter 2 00:14:39.040 =====Discovery Log Entry 0====== 00:14:39.040 trtype: tcp 00:14:39.040 adrfam: ipv4 00:14:39.040 subtype: current discovery subsystem 00:14:39.040 treq: not required 00:14:39.040 portid: 0 00:14:39.040 trsvcid: 4420 
00:14:39.040 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:39.040 traddr: 10.0.0.2 00:14:39.040 eflags: explicit discovery connections, duplicate discovery information 00:14:39.040 sectype: none 00:14:39.040 =====Discovery Log Entry 1====== 00:14:39.040 trtype: tcp 00:14:39.040 adrfam: ipv4 00:14:39.040 subtype: nvme subsystem 00:14:39.040 treq: not required 00:14:39.040 portid: 0 00:14:39.040 trsvcid: 4420 00:14:39.040 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:39.040 traddr: 10.0.0.2 00:14:39.040 eflags: none 00:14:39.040 sectype: none 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:39.040 15:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:40.416 15:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:40.416 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:40.416 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.416 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:40.416 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:40.416 15:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:42.318 
15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:42.318 /dev/nvme0n2 ]] 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:42.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:42.318 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:42.319 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:42.319 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:42.319 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:42.319 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:42.319 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:42.319 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:42.319 rmmod nvme_tcp 00:14:42.578 rmmod nvme_fabrics 00:14:42.578 rmmod nvme_keyring 00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2970734 ']' 
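The teardown that follows guards on the target pid being set (`'[' -n 2970734 ']'`), checks the process name so it never kills a `sudo` wrapper, then kills and reaps the target. A minimal standalone sketch of that pattern — the function name `killprocess_sketch` is illustrative, not SPDK's actual helper, and the real helper does more (e.g. a forced-kill fallback):

```shell
#!/usr/bin/env bash
# Sketch of the guard-and-kill teardown pattern from the log:
# verify the pid is alive, refuse to kill a sudo wrapper, kill, then reap.
killprocess_sketch() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
        return 1                         # process no longer exists
    fi
    local name
    name=$(ps -o comm= -p "$pid")
    if [ "$name" = sudo ]; then
        return 1                         # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true      # reap it if it is our child
    return 0
}
```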
00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2970734 00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2970734 ']' 00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2970734 00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2970734 00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2970734' 00:14:42.578 killing process with pid 2970734 00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2970734 00:14:42.578 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2970734 00:14:42.838 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:42.838 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:42.838 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:42.838 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:42.838 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:42.838 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:14:42.838 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:42.838 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:42.838 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:42.838 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.838 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.838 15:31:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.798 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:44.799 00:14:44.799 real 0m13.161s 00:14:44.799 user 0m20.649s 00:14:44.799 sys 0m5.132s 00:14:44.799 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.799 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:44.799 ************************************ 00:14:44.799 END TEST nvmf_nvme_cli 00:14:44.799 ************************************ 00:14:44.799 15:31:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:44.799 15:31:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:44.799 15:31:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:44.799 15:31:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.799 15:31:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:44.799 ************************************ 00:14:44.799 
START TEST nvmf_vfio_user 00:14:44.799 ************************************ 00:14:44.799 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:45.091 * Looking for test storage... 00:14:45.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:45.091 15:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:45.091 15:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:45.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.091 --rc genhtml_branch_coverage=1 00:14:45.091 --rc genhtml_function_coverage=1 00:14:45.091 --rc genhtml_legend=1 00:14:45.091 --rc geninfo_all_blocks=1 00:14:45.091 --rc geninfo_unexecuted_blocks=1 00:14:45.091 00:14:45.091 ' 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:45.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.091 --rc genhtml_branch_coverage=1 00:14:45.091 --rc genhtml_function_coverage=1 00:14:45.091 --rc genhtml_legend=1 00:14:45.091 --rc geninfo_all_blocks=1 00:14:45.091 --rc geninfo_unexecuted_blocks=1 00:14:45.091 00:14:45.091 ' 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:45.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.091 --rc genhtml_branch_coverage=1 00:14:45.091 --rc genhtml_function_coverage=1 00:14:45.091 --rc genhtml_legend=1 00:14:45.091 --rc geninfo_all_blocks=1 00:14:45.091 --rc geninfo_unexecuted_blocks=1 00:14:45.091 00:14:45.091 ' 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:45.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.091 --rc genhtml_branch_coverage=1 00:14:45.091 --rc genhtml_function_coverage=1 00:14:45.091 --rc genhtml_legend=1 00:14:45.091 --rc geninfo_all_blocks=1 00:14:45.091 --rc geninfo_unexecuted_blocks=1 00:14:45.091 00:14:45.091 ' 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.091 
15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:45.091 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:45.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:45.092 15:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2972028 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2972028' 00:14:45.092 Process pid: 2972028 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2972028 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2972028 ']' 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.092 15:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:45.092 [2024-12-06 15:31:51.018968] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:14:45.092 [2024-12-06 15:31:51.019017] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.412 [2024-12-06 15:31:51.093011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.412 [2024-12-06 15:31:51.135979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.412 [2024-12-06 15:31:51.136016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.412 [2024-12-06 15:31:51.136024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.412 [2024-12-06 15:31:51.136030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.412 [2024-12-06 15:31:51.136039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
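The `waitforlisten 2972028` step above blocks until the freshly launched `nvmf_tgt` is listening on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. The core poll-with-bounded-retry idea can be sketched as below; the helper name is illustrative, and the real helper also verifies the process is still alive while polling:

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket path appears, giving up after a bounded
# number of retries (mirrors the waitforlisten idea; names are illustrative).
wait_for_socket() {
    local sock=$1
    local max_retries=${2:-100}
    local i=0
    while [ ! -S "$sock" ]; do
        if [ "$i" -ge "$max_retries" ]; then
            return 1                     # timed out waiting for the listener
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 0
}
```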
00:14:45.412 [2024-12-06 15:31:51.137618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.412 [2024-12-06 15:31:51.137724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.412 [2024-12-06 15:31:51.137833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.412 [2024-12-06 15:31:51.137833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.412 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.412 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:45.412 15:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:46.346 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:46.604 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:46.604 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:46.604 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:46.604 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:46.604 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:46.863 Malloc1 00:14:46.863 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:46.863 15:31:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:47.121 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:47.379 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:47.379 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:47.379 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:47.638 Malloc2 00:14:47.638 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:47.895 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:47.895 15:31:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:48.154 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:48.154 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:48.154 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:48.155 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:48.155 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:48.155 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:48.155 [2024-12-06 15:31:54.093916] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:14:48.155 [2024-12-06 15:31:54.093963] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2972518 ] 00:14:48.155 [2024-12-06 15:31:54.135803] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:48.155 [2024-12-06 15:31:54.145653] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:48.155 [2024-12-06 15:31:54.145674] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f504cf6c000 00:14:48.155 [2024-12-06 15:31:54.146645] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.155 [2024-12-06 15:31:54.147651] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.155 [2024-12-06 15:31:54.148656] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.155 [2024-12-06 15:31:54.149678] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:48.155 [2024-12-06 15:31:54.150670] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:48.415 [2024-12-06 15:31:54.151676] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.415 [2024-12-06 15:31:54.152677] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:48.415 [2024-12-06 15:31:54.153682] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.415 [2024-12-06 15:31:54.154693] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:48.415 [2024-12-06 15:31:54.154703] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f504cf61000 00:14:48.415 [2024-12-06 15:31:54.155619] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:48.415 [2024-12-06 15:31:54.168633] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:48.415 [2024-12-06 15:31:54.168665] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:48.415 [2024-12-06 15:31:54.173804] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:48.415 [2024-12-06 15:31:54.173839] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:48.415 [2024-12-06 15:31:54.173907] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:48.415 [2024-12-06 15:31:54.173921] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:48.415 [2024-12-06 15:31:54.173927] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:48.415 [2024-12-06 15:31:54.174799] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:48.415 [2024-12-06 15:31:54.174811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:48.415 [2024-12-06 15:31:54.174818] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:48.415 [2024-12-06 15:31:54.175805] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:48.415 [2024-12-06 15:31:54.175812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:48.415 [2024-12-06 15:31:54.175819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:48.415 [2024-12-06 15:31:54.176810] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:48.415 [2024-12-06 15:31:54.176819] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:48.415 [2024-12-06 15:31:54.177818] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:48.415 [2024-12-06 15:31:54.177826] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:48.415 [2024-12-06 15:31:54.177831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:48.415 [2024-12-06 15:31:54.177837] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:48.415 [2024-12-06 15:31:54.177945] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:48.415 [2024-12-06 15:31:54.177949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:48.416 [2024-12-06 15:31:54.177954] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:48.416 [2024-12-06 15:31:54.178825] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:48.416 [2024-12-06 15:31:54.179837] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:48.416 [2024-12-06 15:31:54.180845] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:48.416 [2024-12-06 15:31:54.181845] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:48.416 [2024-12-06 15:31:54.181906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:48.416 [2024-12-06 15:31:54.182858] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:48.416 [2024-12-06 15:31:54.182866] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:48.416 [2024-12-06 15:31:54.182871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.182888] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:48.416 [2024-12-06 15:31:54.182895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.182914] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:48.416 [2024-12-06 15:31:54.182920] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:48.416 [2024-12-06 15:31:54.182923] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:48.416 [2024-12-06 15:31:54.182936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:48.416 [2024-12-06 15:31:54.182978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:48.416 [2024-12-06 15:31:54.182988] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:48.416 [2024-12-06 15:31:54.182994] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:48.416 [2024-12-06 15:31:54.182998] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:48.416 [2024-12-06 15:31:54.183002] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:48.416 [2024-12-06 15:31:54.183007] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:48.416 [2024-12-06 15:31:54.183011] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:48.416 [2024-12-06 15:31:54.183015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183032] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:48.416 [2024-12-06 15:31:54.183046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:48.416 [2024-12-06 15:31:54.183056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.416 [2024-12-06 
15:31:54.183064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.416 [2024-12-06 15:31:54.183071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.416 [2024-12-06 15:31:54.183078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.416 [2024-12-06 15:31:54.183083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183099] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:48.416 [2024-12-06 15:31:54.183108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:48.416 [2024-12-06 15:31:54.183114] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:48.416 [2024-12-06 15:31:54.183118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:48.416 [2024-12-06 15:31:54.183147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:48.416 [2024-12-06 15:31:54.183196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183210] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:48.416 [2024-12-06 15:31:54.183215] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:48.416 [2024-12-06 15:31:54.183218] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:48.416 [2024-12-06 15:31:54.183223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:48.416 [2024-12-06 15:31:54.183236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:48.416 [2024-12-06 15:31:54.183245] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:48.416 [2024-12-06 15:31:54.183253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183267] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:48.416 [2024-12-06 15:31:54.183271] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:48.416 [2024-12-06 15:31:54.183274] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:48.416 [2024-12-06 15:31:54.183279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:48.416 [2024-12-06 15:31:54.183299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:48.416 [2024-12-06 15:31:54.183310] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183324] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:48.416 [2024-12-06 15:31:54.183328] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:48.416 [2024-12-06 15:31:54.183331] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:48.416 [2024-12-06 15:31:54.183336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:48.416 [2024-12-06 15:31:54.183352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:48.416 [2024-12-06 15:31:54.183359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183400] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:48.416 [2024-12-06 15:31:54.183404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:48.416 [2024-12-06 15:31:54.183409] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:48.416 [2024-12-06 15:31:54.183425] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:48.416 [2024-12-06 15:31:54.183438] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:48.416 [2024-12-06 15:31:54.183448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:48.416 [2024-12-06 15:31:54.183457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:48.416 [2024-12-06 15:31:54.183466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:48.416 [2024-12-06 15:31:54.183479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:48.416 [2024-12-06 15:31:54.183488] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:48.416 [2024-12-06 15:31:54.183502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:48.416 [2024-12-06 15:31:54.183516] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:48.417 [2024-12-06 15:31:54.183520] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:48.417 [2024-12-06 15:31:54.183523] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:48.417 [2024-12-06 15:31:54.183526] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:48.417 [2024-12-06 15:31:54.183529] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:48.417 [2024-12-06 15:31:54.183535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:48.417 [2024-12-06 15:31:54.183541] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:48.417 [2024-12-06 15:31:54.183545] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:48.417 [2024-12-06 15:31:54.183548] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:48.417 [2024-12-06 15:31:54.183553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:48.417 [2024-12-06 15:31:54.183561] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:48.417 [2024-12-06 15:31:54.183565] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:48.417 [2024-12-06 15:31:54.183568] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:48.417 [2024-12-06 15:31:54.183573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:48.417 [2024-12-06 15:31:54.183580] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:48.417 [2024-12-06 15:31:54.183584] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:48.417 [2024-12-06 15:31:54.183587] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:48.417 [2024-12-06 15:31:54.183592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:48.417 [2024-12-06 15:31:54.183598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:48.417 [2024-12-06 15:31:54.183608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:48.417 [2024-12-06 15:31:54.183617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:48.417 [2024-12-06 15:31:54.183623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:48.417 ===================================================== 00:14:48.417 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:48.417 ===================================================== 00:14:48.417 Controller Capabilities/Features 00:14:48.417 ================================ 00:14:48.417 Vendor ID: 4e58 00:14:48.417 Subsystem Vendor ID: 4e58 00:14:48.417 Serial Number: SPDK1 00:14:48.417 Model Number: SPDK bdev Controller 00:14:48.417 Firmware Version: 25.01 00:14:48.417 Recommended Arb Burst: 6 00:14:48.417 IEEE OUI Identifier: 8d 6b 50 00:14:48.417 Multi-path I/O 00:14:48.417 May have multiple subsystem ports: Yes 00:14:48.417 May have multiple controllers: Yes 00:14:48.417 Associated with SR-IOV VF: No 00:14:48.417 Max Data Transfer Size: 131072 00:14:48.417 Max Number of Namespaces: 32 00:14:48.417 Max Number of I/O Queues: 127 00:14:48.417 NVMe Specification Version (VS): 1.3 00:14:48.417 NVMe Specification Version (Identify): 1.3 00:14:48.417 Maximum Queue Entries: 256 00:14:48.417 Contiguous Queues Required: Yes 00:14:48.417 Arbitration Mechanisms Supported 00:14:48.417 Weighted Round Robin: Not Supported 00:14:48.417 Vendor Specific: Not Supported 00:14:48.417 Reset Timeout: 15000 ms 00:14:48.417 Doorbell Stride: 4 bytes 00:14:48.417 NVM Subsystem Reset: Not Supported 00:14:48.417 Command Sets Supported 00:14:48.417 NVM Command Set: Supported 00:14:48.417 Boot Partition: Not Supported 00:14:48.417 Memory 
Page Size Minimum: 4096 bytes 00:14:48.417 Memory Page Size Maximum: 4096 bytes 00:14:48.417 Persistent Memory Region: Not Supported 00:14:48.417 Optional Asynchronous Events Supported 00:14:48.417 Namespace Attribute Notices: Supported 00:14:48.417 Firmware Activation Notices: Not Supported 00:14:48.417 ANA Change Notices: Not Supported 00:14:48.417 PLE Aggregate Log Change Notices: Not Supported 00:14:48.417 LBA Status Info Alert Notices: Not Supported 00:14:48.417 EGE Aggregate Log Change Notices: Not Supported 00:14:48.417 Normal NVM Subsystem Shutdown event: Not Supported 00:14:48.417 Zone Descriptor Change Notices: Not Supported 00:14:48.417 Discovery Log Change Notices: Not Supported 00:14:48.417 Controller Attributes 00:14:48.417 128-bit Host Identifier: Supported 00:14:48.417 Non-Operational Permissive Mode: Not Supported 00:14:48.417 NVM Sets: Not Supported 00:14:48.417 Read Recovery Levels: Not Supported 00:14:48.417 Endurance Groups: Not Supported 00:14:48.417 Predictable Latency Mode: Not Supported 00:14:48.417 Traffic Based Keep Alive: Not Supported 00:14:48.417 Namespace Granularity: Not Supported 00:14:48.417 SQ Associations: Not Supported 00:14:48.417 UUID List: Not Supported 00:14:48.417 Multi-Domain Subsystem: Not Supported 00:14:48.417 Fixed Capacity Management: Not Supported 00:14:48.417 Variable Capacity Management: Not Supported 00:14:48.417 Delete Endurance Group: Not Supported 00:14:48.417 Delete NVM Set: Not Supported 00:14:48.417 Extended LBA Formats Supported: Not Supported 00:14:48.417 Flexible Data Placement Supported: Not Supported 00:14:48.417 00:14:48.417 Controller Memory Buffer Support 00:14:48.417 ================================ 00:14:48.417 Supported: No 00:14:48.417 00:14:48.417 Persistent Memory Region Support 00:14:48.417 ================================ 00:14:48.417 Supported: No 00:14:48.417 00:14:48.417 Admin Command Set Attributes 00:14:48.417 ============================ 00:14:48.417 Security Send/Receive: Not Supported 
00:14:48.417 Format NVM: Not Supported 00:14:48.417 Firmware Activate/Download: Not Supported 00:14:48.417 Namespace Management: Not Supported 00:14:48.417 Device Self-Test: Not Supported 00:14:48.417 Directives: Not Supported 00:14:48.417 NVMe-MI: Not Supported 00:14:48.417 Virtualization Management: Not Supported 00:14:48.417 Doorbell Buffer Config: Not Supported 00:14:48.417 Get LBA Status Capability: Not Supported 00:14:48.417 Command & Feature Lockdown Capability: Not Supported 00:14:48.417 Abort Command Limit: 4 00:14:48.417 Async Event Request Limit: 4 00:14:48.417 Number of Firmware Slots: N/A 00:14:48.417 Firmware Slot 1 Read-Only: N/A 00:14:48.417 Firmware Activation Without Reset: N/A 00:14:48.417 Multiple Update Detection Support: N/A 00:14:48.417 Firmware Update Granularity: No Information Provided 00:14:48.417 Per-Namespace SMART Log: No 00:14:48.417 Asymmetric Namespace Access Log Page: Not Supported 00:14:48.417 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:48.417 Command Effects Log Page: Supported 00:14:48.417 Get Log Page Extended Data: Supported 00:14:48.417 Telemetry Log Pages: Not Supported 00:14:48.417 Persistent Event Log Pages: Not Supported 00:14:48.417 Supported Log Pages Log Page: May Support 00:14:48.417 Commands Supported & Effects Log Page: Not Supported 00:14:48.417 Feature Identifiers & Effects Log Page: May Support 00:14:48.417 NVMe-MI Commands & Effects Log Page: May Support 00:14:48.417 Data Area 4 for Telemetry Log: Not Supported 00:14:48.417 Error Log Page Entries Supported: 128 00:14:48.417 Keep Alive: Supported 00:14:48.417 Keep Alive Granularity: 10000 ms 00:14:48.417 00:14:48.417 NVM Command Set Attributes 00:14:48.417 ========================== 00:14:48.417 Submission Queue Entry Size 00:14:48.417 Max: 64 00:14:48.417 Min: 64 00:14:48.417 Completion Queue Entry Size 00:14:48.417 Max: 16 00:14:48.417 Min: 16 00:14:48.417 Number of Namespaces: 32 00:14:48.417 Compare Command: Supported 00:14:48.417 Write Uncorrectable 
Command: Not Supported 00:14:48.417 Dataset Management Command: Supported 00:14:48.417 Write Zeroes Command: Supported 00:14:48.417 Set Features Save Field: Not Supported 00:14:48.417 Reservations: Not Supported 00:14:48.417 Timestamp: Not Supported 00:14:48.417 Copy: Supported 00:14:48.417 Volatile Write Cache: Present 00:14:48.417 Atomic Write Unit (Normal): 1 00:14:48.417 Atomic Write Unit (PFail): 1 00:14:48.417 Atomic Compare & Write Unit: 1 00:14:48.417 Fused Compare & Write: Supported 00:14:48.417 Scatter-Gather List 00:14:48.417 SGL Command Set: Supported (Dword aligned) 00:14:48.417 SGL Keyed: Not Supported 00:14:48.417 SGL Bit Bucket Descriptor: Not Supported 00:14:48.417 SGL Metadata Pointer: Not Supported 00:14:48.417 Oversized SGL: Not Supported 00:14:48.417 SGL Metadata Address: Not Supported 00:14:48.417 SGL Offset: Not Supported 00:14:48.417 Transport SGL Data Block: Not Supported 00:14:48.417 Replay Protected Memory Block: Not Supported 00:14:48.417 00:14:48.417 Firmware Slot Information 00:14:48.417 ========================= 00:14:48.417 Active slot: 1 00:14:48.417 Slot 1 Firmware Revision: 25.01 00:14:48.417 00:14:48.417 00:14:48.417 Commands Supported and Effects 00:14:48.417 ============================== 00:14:48.417 Admin Commands 00:14:48.417 -------------- 00:14:48.417 Get Log Page (02h): Supported 00:14:48.417 Identify (06h): Supported 00:14:48.418 Abort (08h): Supported 00:14:48.418 Set Features (09h): Supported 00:14:48.418 Get Features (0Ah): Supported 00:14:48.418 Asynchronous Event Request (0Ch): Supported 00:14:48.418 Keep Alive (18h): Supported 00:14:48.418 I/O Commands 00:14:48.418 ------------ 00:14:48.418 Flush (00h): Supported LBA-Change 00:14:48.418 Write (01h): Supported LBA-Change 00:14:48.418 Read (02h): Supported 00:14:48.418 Compare (05h): Supported 00:14:48.418 Write Zeroes (08h): Supported LBA-Change 00:14:48.418 Dataset Management (09h): Supported LBA-Change 00:14:48.418 Copy (19h): Supported LBA-Change 00:14:48.418 
00:14:48.418 Error Log 00:14:48.418 ========= 00:14:48.418 00:14:48.418 Arbitration 00:14:48.418 =========== 00:14:48.418 Arbitration Burst: 1 00:14:48.418 00:14:48.418 Power Management 00:14:48.418 ================ 00:14:48.418 Number of Power States: 1 00:14:48.418 Current Power State: Power State #0 00:14:48.418 Power State #0: 00:14:48.418 Max Power: 0.00 W 00:14:48.418 Non-Operational State: Operational 00:14:48.418 Entry Latency: Not Reported 00:14:48.418 Exit Latency: Not Reported 00:14:48.418 Relative Read Throughput: 0 00:14:48.418 Relative Read Latency: 0 00:14:48.418 Relative Write Throughput: 0 00:14:48.418 Relative Write Latency: 0 00:14:48.418 Idle Power: Not Reported 00:14:48.418 Active Power: Not Reported 00:14:48.418 Non-Operational Permissive Mode: Not Supported 00:14:48.418 00:14:48.418 Health Information 00:14:48.418 ================== 00:14:48.418 Critical Warnings: 00:14:48.418 Available Spare Space: OK 00:14:48.418 Temperature: OK 00:14:48.418 Device Reliability: OK 00:14:48.418 Read Only: No 00:14:48.418 Volatile Memory Backup: OK 00:14:48.418 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:48.418 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:48.418 Available Spare: 0% 00:14:48.418 Available Spare Threshold: 0% 00:14:48.418 Life Percentage Used: 0% 
[2024-12-06 15:31:54.183705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:48.418 [2024-12-06 15:31:54.183712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:48.418 [2024-12-06 15:31:54.183740] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:48.418 [2024-12-06 15:31:54.183749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.418 [2024-12-06 15:31:54.183754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.418 [2024-12-06 15:31:54.183760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.418 [2024-12-06 15:31:54.183765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.418 [2024-12-06 15:31:54.183865] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:48.418 [2024-12-06 15:31:54.183874] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:48.418 [2024-12-06 15:31:54.184875] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:48.418 [2024-12-06 15:31:54.184926] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:48.418 [2024-12-06 15:31:54.184932] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:48.418 [2024-12-06 15:31:54.185873] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:48.418 [2024-12-06 15:31:54.185884] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:48.418 [2024-12-06 15:31:54.185938] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:48.418 [2024-12-06 15:31:54.186896] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:14:48.418 Data Units Read: 0 00:14:48.418 Data Units Written: 0 00:14:48.418 Host Read Commands: 0 00:14:48.418 Host Write Commands: 0 00:14:48.418 Controller Busy Time: 0 minutes 00:14:48.418 Power Cycles: 0 00:14:48.418 Power On Hours: 0 hours 00:14:48.418 Unsafe Shutdowns: 0 00:14:48.418 Unrecoverable Media Errors: 0 00:14:48.418 Lifetime Error Log Entries: 0 00:14:48.418 Warning Temperature Time: 0 minutes 00:14:48.418 Critical Temperature Time: 0 minutes 00:14:48.418 00:14:48.418 Number of Queues 00:14:48.418 ================ 00:14:48.418 Number of I/O Submission Queues: 127 00:14:48.418 Number of I/O Completion Queues: 127 00:14:48.418 00:14:48.418 Active Namespaces 00:14:48.418 ================= 00:14:48.418 Namespace ID:1 00:14:48.418 Error Recovery Timeout: Unlimited 00:14:48.418 Command Set Identifier: NVM (00h) 00:14:48.418 Deallocate: Supported 00:14:48.418 Deallocated/Unwritten Error: Not Supported 00:14:48.418 Deallocated Read Value: Unknown 00:14:48.418 Deallocate in Write Zeroes: Not Supported 00:14:48.418 Deallocated Guard Field: 0xFFFF 00:14:48.418 Flush: Supported 00:14:48.418 Reservation: Supported 00:14:48.418 Namespace Sharing Capabilities: Multiple Controllers 00:14:48.418 Size (in LBAs): 131072 (0GiB) 00:14:48.418 Capacity (in LBAs): 131072 (0GiB) 00:14:48.418 Utilization (in LBAs): 131072 (0GiB) 00:14:48.418 NGUID: 2396B1DCE78546D995E958A0A1A87AE4 00:14:48.418 UUID: 2396b1dc-e785-46d9-95e9-58a0a1a87ae4 00:14:48.418 Thin Provisioning: Not Supported 00:14:48.418 Per-NS Atomic Units: Yes 00:14:48.418 Atomic Boundary Size (Normal): 0 00:14:48.418 Atomic Boundary Size (PFail): 0 00:14:48.418 Atomic Boundary Offset: 0 00:14:48.418 Maximum Single Source Range Length: 65535 00:14:48.418 Maximum Copy Length: 65535 00:14:48.418 Maximum Source Range Count: 1 00:14:48.418 NGUID/EUI64 Never Reused: No 00:14:48.418 Namespace Write Protected: No 00:14:48.418 Number of LBA Formats: 1 00:14:48.418 Current LBA Format: LBA Format #00 00:14:48.418 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:48.418 00:14:48.418 15:31:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:48.677 [2024-12-06 15:31:54.416236] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:53.942 Initializing NVMe Controllers 00:14:53.942 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:53.942 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:53.942 Initialization complete. Launching workers. 00:14:53.942 ======================================================== 00:14:53.942 Latency(us) 00:14:53.942 Device Information : IOPS MiB/s Average min max 00:14:53.942 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39892.09 155.83 3208.48 958.61 10612.74 00:14:53.942 ======================================================== 00:14:53.942 Total : 39892.09 155.83 3208.48 958.61 10612.74 00:14:53.942 00:14:53.942 [2024-12-06 15:31:59.440457] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:53.942 15:31:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:53.942 [2024-12-06 15:31:59.671528] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.212 Initializing NVMe Controllers 00:14:59.212 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:59.212 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:59.212 Initialization complete. Launching workers. 00:14:59.212 ======================================================== 00:14:59.212 Latency(us) 00:14:59.212 Device Information : IOPS MiB/s Average min max 00:14:59.212 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16038.29 62.65 7986.25 5995.89 15731.74 00:14:59.212 ======================================================== 00:14:59.212 Total : 16038.29 62.65 7986.25 5995.89 15731.74 00:14:59.212 00:14:59.212 [2024-12-06 15:32:04.715401] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.212 15:32:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:59.212 [2024-12-06 15:32:04.929456] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:04.499 [2024-12-06 15:32:10.017735] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:04.499 Initializing NVMe Controllers 00:15:04.499 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:04.499 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:04.499 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:04.499 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:04.499 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:04.499 Initialization complete. 
Launching workers. 00:15:04.499 Starting thread on core 2 00:15:04.499 Starting thread on core 3 00:15:04.499 Starting thread on core 1 00:15:04.499 15:32:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:04.499 [2024-12-06 15:32:10.313762] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.782 [2024-12-06 15:32:13.376040] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.782 Initializing NVMe Controllers 00:15:07.782 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:07.782 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:07.782 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:07.782 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:07.782 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:07.782 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:07.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:07.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:07.782 Initialization complete. Launching workers. 
00:15:07.782 Starting thread on core 1 with urgent priority queue 00:15:07.782 Starting thread on core 2 with urgent priority queue 00:15:07.782 Starting thread on core 3 with urgent priority queue 00:15:07.782 Starting thread on core 0 with urgent priority queue 00:15:07.782 SPDK bdev Controller (SPDK1 ) core 0: 8694.67 IO/s 11.50 secs/100000 ios 00:15:07.782 SPDK bdev Controller (SPDK1 ) core 1: 7858.00 IO/s 12.73 secs/100000 ios 00:15:07.782 SPDK bdev Controller (SPDK1 ) core 2: 7348.67 IO/s 13.61 secs/100000 ios 00:15:07.782 SPDK bdev Controller (SPDK1 ) core 3: 9055.67 IO/s 11.04 secs/100000 ios 00:15:07.782 ======================================================== 00:15:07.782 00:15:07.782 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:07.782 [2024-12-06 15:32:13.670797] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.782 Initializing NVMe Controllers 00:15:07.782 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:07.782 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:07.782 Namespace ID: 1 size: 0GB 00:15:07.782 Initialization complete. 00:15:07.782 INFO: using host memory buffer for IO 00:15:07.782 Hello world! 
00:15:07.782 [2024-12-06 15:32:13.705003] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.782 15:32:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:08.040 [2024-12-06 15:32:13.985319] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.414 Initializing NVMe Controllers 00:15:09.414 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.414 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.414 Initialization complete. Launching workers. 00:15:09.414 submit (in ns) avg, min, max = 6445.9, 3201.9, 3999094.3 00:15:09.414 complete (in ns) avg, min, max = 21964.7, 1764.8, 5991288.6 00:15:09.414 00:15:09.414 Submit histogram 00:15:09.414 ================ 00:15:09.414 Range in us Cumulative Count 00:15:09.414 3.200 - 3.215: 0.1592% ( 26) 00:15:09.414 3.215 - 3.230: 1.1693% ( 165) 00:15:09.414 3.230 - 3.246: 4.9100% ( 611) 00:15:09.414 3.246 - 3.261: 10.5424% ( 920) 00:15:09.414 3.261 - 3.276: 16.7871% ( 1020) 00:15:09.414 3.276 - 3.291: 23.5521% ( 1105) 00:15:09.414 3.291 - 3.307: 29.5641% ( 982) 00:15:09.414 3.307 - 3.322: 35.4537% ( 962) 00:15:09.414 3.322 - 3.337: 41.4106% ( 973) 00:15:09.414 3.337 - 3.352: 47.3124% ( 964) 00:15:09.414 3.352 - 3.368: 52.9448% ( 920) 00:15:09.414 3.368 - 3.383: 59.2996% ( 1038) 00:15:09.414 3.383 - 3.398: 66.8299% ( 1230) 00:15:09.414 3.398 - 3.413: 71.8195% ( 815) 00:15:09.414 3.413 - 3.429: 77.1764% ( 875) 00:15:09.414 3.429 - 3.444: 81.4008% ( 690) 00:15:09.414 3.444 - 3.459: 84.3517% ( 482) 00:15:09.414 3.459 - 3.474: 85.9434% ( 260) 00:15:09.414 3.474 - 3.490: 87.0393% ( 179) 00:15:09.414 3.490 - 3.505: 87.6209% ( 95) 00:15:09.414 3.505 - 3.520: 88.0495% ( 
70) 00:15:09.414 3.520 - 3.535: 88.5454% ( 81) 00:15:09.414 3.535 - 3.550: 89.2800% ( 120) 00:15:09.414 3.550 - 3.566: 90.1433% ( 141) 00:15:09.414 3.566 - 3.581: 91.0371% ( 146) 00:15:09.414 3.581 - 3.596: 92.0779% ( 170) 00:15:09.414 3.596 - 3.611: 93.0819% ( 164) 00:15:09.414 3.611 - 3.627: 93.9941% ( 149) 00:15:09.414 3.627 - 3.642: 95.0227% ( 168) 00:15:09.414 3.642 - 3.657: 96.0512% ( 168) 00:15:09.414 3.657 - 3.672: 96.8226% ( 126) 00:15:09.414 3.672 - 3.688: 97.4226% ( 98) 00:15:09.414 3.688 - 3.703: 98.0287% ( 99) 00:15:09.414 3.703 - 3.718: 98.4266% ( 65) 00:15:09.414 3.718 - 3.733: 98.7021% ( 45) 00:15:09.414 3.733 - 3.749: 98.9592% ( 42) 00:15:09.414 3.749 - 3.764: 99.1735% ( 35) 00:15:09.414 3.764 - 3.779: 99.2898% ( 19) 00:15:09.414 3.779 - 3.794: 99.4123% ( 20) 00:15:09.414 3.794 - 3.810: 99.4674% ( 9) 00:15:09.414 3.810 - 3.825: 99.4919% ( 4) 00:15:09.414 3.825 - 3.840: 99.5163% ( 4) 00:15:09.414 3.840 - 3.855: 99.5286% ( 2) 00:15:09.414 3.855 - 3.870: 99.5408% ( 2) 00:15:09.414 3.870 - 3.886: 99.5531% ( 2) 00:15:09.414 3.886 - 3.901: 99.5592% ( 1) 00:15:09.414 3.901 - 3.931: 99.5837% ( 4) 00:15:09.414 3.931 - 3.962: 99.6082% ( 4) 00:15:09.414 3.962 - 3.992: 99.6204% ( 2) 00:15:09.414 3.992 - 4.023: 99.6265% ( 1) 00:15:09.414 4.023 - 4.053: 99.6388% ( 2) 00:15:09.414 4.114 - 4.145: 99.6449% ( 1) 00:15:09.414 4.267 - 4.297: 99.6510% ( 1) 00:15:09.414 4.998 - 5.029: 99.6572% ( 1) 00:15:09.414 5.059 - 5.090: 99.6694% ( 2) 00:15:09.414 5.090 - 5.120: 99.6755% ( 1) 00:15:09.414 5.181 - 5.211: 99.6878% ( 2) 00:15:09.414 5.211 - 5.242: 99.6939% ( 1) 00:15:09.414 5.333 - 5.364: 99.7061% ( 2) 00:15:09.414 5.455 - 5.486: 99.7123% ( 1) 00:15:09.414 5.486 - 5.516: 99.7184% ( 1) 00:15:09.414 5.516 - 5.547: 99.7367% ( 3) 00:15:09.414 5.577 - 5.608: 99.7429% ( 1) 00:15:09.414 5.608 - 5.638: 99.7490% ( 1) 00:15:09.414 5.699 - 5.730: 99.7551% ( 1) 00:15:09.414 5.730 - 5.760: 99.7612% ( 1) 00:15:09.414 5.760 - 5.790: 99.7735% ( 2) 00:15:09.414 5.790 - 5.821: 99.7796% 
( 1) 00:15:09.414 5.821 - 5.851: 99.7918% ( 2) 00:15:09.414 5.851 - 5.882: 99.8041% ( 2) 00:15:09.414 5.882 - 5.912: 99.8102% ( 1) 00:15:09.414 5.912 - 5.943: 99.8225% ( 2) 00:15:09.414 6.065 - 6.095: 99.8286% ( 1) 00:15:09.414 6.126 - 6.156: 99.8347% ( 1) 00:15:09.414 6.156 - 6.187: 99.8408% ( 1) 00:15:09.414 6.248 - 6.278: 99.8531% ( 2) 00:15:09.414 6.278 - 6.309: 99.8592% ( 1) 00:15:09.414 6.309 - 6.339: 99.8653% ( 1) 00:15:09.414 6.430 - 6.461: 99.8714% ( 1) 00:15:09.414 [2024-12-06 15:32:15.007412] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.414 6.674 - 6.705: 99.8776% ( 1) 00:15:09.414 6.735 - 6.766: 99.8837% ( 1) 00:15:09.414 6.796 - 6.827: 99.8898% ( 1) 00:15:09.414 7.070 - 7.101: 99.9020% ( 2) 00:15:09.414 7.101 - 7.131: 99.9082% ( 1) 00:15:09.414 7.802 - 7.863: 99.9143% ( 1) 00:15:09.414 8.290 - 8.350: 99.9204% ( 1) 00:15:09.414 2059.703 - 2075.307: 99.9265% ( 1) 00:15:09.414 3994.575 - 4025.783: 100.0000% ( 12) 00:15:09.414 00:15:09.414 Complete histogram 00:15:09.414 ================== 00:15:09.414 Range in us Cumulative Count 00:15:09.414 1.760 - 1.768: 0.0367% ( 6) 00:15:09.414 1.768 - 1.775: 0.9918% ( 156) 00:15:09.414 1.775 - 1.783: 4.7018% ( 606) 00:15:09.414 1.783 - 1.790: 8.8649% ( 680) 00:15:09.414 1.790 - 1.798: 10.7200% ( 303) 00:15:09.414 1.798 - 1.806: 11.6261% ( 148) 00:15:09.414 1.806 - 1.813: 12.1954% ( 93) 00:15:09.414 1.813 - 1.821: 12.7648% ( 93) 00:15:09.414 1.821 - 1.829: 18.8503% ( 994) 00:15:09.414 1.829 - 1.836: 42.3534% ( 3839) 00:15:09.414 1.836 - 1.844: 70.4665% ( 4592) 00:15:09.414 1.844 - 1.851: 85.4965% ( 2455) 00:15:09.414 1.851 - 1.859: 90.7922% ( 865) 00:15:09.414 1.859 - 1.867: 93.2901% ( 408) 00:15:09.414 1.867 - 1.874: 94.8084% ( 248) 00:15:09.414 1.874 - 1.882: 95.4451% ( 104) 00:15:09.414 1.882 - 1.890: 95.8002% ( 58) 00:15:09.414 1.890 - 1.897: 96.0512% ( 41) 00:15:09.414 1.897 - 1.905: 96.6022% ( 90) 00:15:09.414 1.905 - 1.912: 97.5511% ( 155) 
00:15:09.414 1.912 - 1.920: 98.3409% ( 129) 00:15:09.414 1.920 - 1.928: 98.7939% ( 74) 00:15:09.414 1.928 - 1.935: 99.0266% ( 38) 00:15:09.414 1.935 - 1.943: 99.1000% ( 12) 00:15:09.414 1.943 - 1.950: 99.1368% ( 6) 00:15:09.414 1.950 - 1.966: 99.1551% ( 3) 00:15:09.414 1.981 - 1.996: 99.1796% ( 4) 00:15:09.414 1.996 - 2.011: 99.2041% ( 4) 00:15:09.414 2.011 - 2.027: 99.2164% ( 2) 00:15:09.414 2.027 - 2.042: 99.2225% ( 1) 00:15:09.414 2.042 - 2.057: 99.2286% ( 1) 00:15:09.414 2.072 - 2.088: 99.2531% ( 4) 00:15:09.414 2.088 - 2.103: 99.2653% ( 2) 00:15:09.414 2.103 - 2.118: 99.2715% ( 1) 00:15:09.414 2.118 - 2.133: 99.3082% ( 6) 00:15:09.414 2.133 - 2.149: 99.3143% ( 1) 00:15:09.414 2.194 - 2.210: 99.3204% ( 1) 00:15:09.414 2.210 - 2.225: 99.3388% ( 3) 00:15:09.414 2.255 - 2.270: 99.3449% ( 1) 00:15:09.414 2.286 - 2.301: 99.3572% ( 2) 00:15:09.414 2.423 - 2.438: 99.3633% ( 1) 00:15:09.414 3.672 - 3.688: 99.3694% ( 1) 00:15:09.414 3.855 - 3.870: 99.3817% ( 2) 00:15:09.414 4.053 - 4.084: 99.3939% ( 2) 00:15:09.414 4.267 - 4.297: 99.4000% ( 1) 00:15:09.414 4.297 - 4.328: 99.4061% ( 1) 00:15:09.414 4.328 - 4.358: 99.4123% ( 1) 00:15:09.414 4.358 - 4.389: 99.4184% ( 1) 00:15:09.414 4.480 - 4.510: 99.4306% ( 2) 00:15:09.414 4.510 - 4.541: 99.4368% ( 1) 00:15:09.414 4.571 - 4.602: 99.4429% ( 1) 00:15:09.414 4.815 - 4.846: 99.4551% ( 2) 00:15:09.414 5.150 - 5.181: 99.4612% ( 1) 00:15:09.414 5.303 - 5.333: 99.4674% ( 1) 00:15:09.414 5.912 - 5.943: 99.4735% ( 1) 00:15:09.414 6.126 - 6.156: 99.4796% ( 1) 00:15:09.414 7.771 - 7.802: 99.4857% ( 1) 00:15:09.414 8.533 - 8.594: 99.4919% ( 1) 00:15:09.414 9.813 - 9.874: 99.4980% ( 1) 00:15:09.414 2200.137 - 2215.741: 99.5041% ( 1) 00:15:09.414 3978.971 - 3994.575: 99.5347% ( 5) 00:15:09.415 3994.575 - 4025.783: 99.9878% ( 74) 00:15:09.415 4993.219 - 5024.427: 99.9939% ( 1) 00:15:09.415 5960.655 - 5991.863: 100.0000% ( 1) 00:15:09.415 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # 
aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:09.415 [ 00:15:09.415 { 00:15:09.415 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:09.415 "subtype": "Discovery", 00:15:09.415 "listen_addresses": [], 00:15:09.415 "allow_any_host": true, 00:15:09.415 "hosts": [] 00:15:09.415 }, 00:15:09.415 { 00:15:09.415 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:09.415 "subtype": "NVMe", 00:15:09.415 "listen_addresses": [ 00:15:09.415 { 00:15:09.415 "trtype": "VFIOUSER", 00:15:09.415 "adrfam": "IPv4", 00:15:09.415 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:09.415 "trsvcid": "0" 00:15:09.415 } 00:15:09.415 ], 00:15:09.415 "allow_any_host": true, 00:15:09.415 "hosts": [], 00:15:09.415 "serial_number": "SPDK1", 00:15:09.415 "model_number": "SPDK bdev Controller", 00:15:09.415 "max_namespaces": 32, 00:15:09.415 "min_cntlid": 1, 00:15:09.415 "max_cntlid": 65519, 00:15:09.415 "namespaces": [ 00:15:09.415 { 00:15:09.415 "nsid": 1, 00:15:09.415 "bdev_name": "Malloc1", 00:15:09.415 "name": "Malloc1", 00:15:09.415 "nguid": "2396B1DCE78546D995E958A0A1A87AE4", 00:15:09.415 "uuid": "2396b1dc-e785-46d9-95e9-58a0a1a87ae4" 00:15:09.415 } 00:15:09.415 ] 00:15:09.415 }, 00:15:09.415 { 00:15:09.415 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:09.415 "subtype": "NVMe", 00:15:09.415 "listen_addresses": [ 00:15:09.415 { 00:15:09.415 "trtype": "VFIOUSER", 
00:15:09.415 "adrfam": "IPv4", 00:15:09.415 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:09.415 "trsvcid": "0" 00:15:09.415 } 00:15:09.415 ], 00:15:09.415 "allow_any_host": true, 00:15:09.415 "hosts": [], 00:15:09.415 "serial_number": "SPDK2", 00:15:09.415 "model_number": "SPDK bdev Controller", 00:15:09.415 "max_namespaces": 32, 00:15:09.415 "min_cntlid": 1, 00:15:09.415 "max_cntlid": 65519, 00:15:09.415 "namespaces": [ 00:15:09.415 { 00:15:09.415 "nsid": 1, 00:15:09.415 "bdev_name": "Malloc2", 00:15:09.415 "name": "Malloc2", 00:15:09.415 "nguid": "A5C770568EF947228B5C2FEB25E93A4F", 00:15:09.415 "uuid": "a5c77056-8ef9-4722-8b5c-2feb25e93a4f" 00:15:09.415 } 00:15:09.415 ] 00:15:09.415 } 00:15:09.415 ] 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2976119 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:09.415 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:09.673 [2024-12-06 15:32:15.425782] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.673 Malloc3 00:15:09.673 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:09.673 [2024-12-06 15:32:15.668575] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.932 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:09.932 Asynchronous Event Request test 00:15:09.932 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.932 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.932 Registering asynchronous event callbacks... 00:15:09.932 Starting namespace attribute notice tests for all controllers... 00:15:09.932 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:09.932 aer_cb - Changed Namespace 00:15:09.932 Cleaning up... 
00:15:09.932 [ 00:15:09.932 { 00:15:09.932 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:09.932 "subtype": "Discovery", 00:15:09.932 "listen_addresses": [], 00:15:09.932 "allow_any_host": true, 00:15:09.932 "hosts": [] 00:15:09.932 }, 00:15:09.932 { 00:15:09.932 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:09.932 "subtype": "NVMe", 00:15:09.932 "listen_addresses": [ 00:15:09.932 { 00:15:09.932 "trtype": "VFIOUSER", 00:15:09.932 "adrfam": "IPv4", 00:15:09.932 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:09.932 "trsvcid": "0" 00:15:09.932 } 00:15:09.932 ], 00:15:09.932 "allow_any_host": true, 00:15:09.932 "hosts": [], 00:15:09.932 "serial_number": "SPDK1", 00:15:09.932 "model_number": "SPDK bdev Controller", 00:15:09.932 "max_namespaces": 32, 00:15:09.932 "min_cntlid": 1, 00:15:09.932 "max_cntlid": 65519, 00:15:09.932 "namespaces": [ 00:15:09.932 { 00:15:09.932 "nsid": 1, 00:15:09.932 "bdev_name": "Malloc1", 00:15:09.932 "name": "Malloc1", 00:15:09.932 "nguid": "2396B1DCE78546D995E958A0A1A87AE4", 00:15:09.932 "uuid": "2396b1dc-e785-46d9-95e9-58a0a1a87ae4" 00:15:09.932 }, 00:15:09.932 { 00:15:09.932 "nsid": 2, 00:15:09.932 "bdev_name": "Malloc3", 00:15:09.932 "name": "Malloc3", 00:15:09.932 "nguid": "C6E60A2B49604ED3963407AB35070763", 00:15:09.932 "uuid": "c6e60a2b-4960-4ed3-9634-07ab35070763" 00:15:09.932 } 00:15:09.932 ] 00:15:09.932 }, 00:15:09.932 { 00:15:09.932 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:09.932 "subtype": "NVMe", 00:15:09.932 "listen_addresses": [ 00:15:09.932 { 00:15:09.932 "trtype": "VFIOUSER", 00:15:09.932 "adrfam": "IPv4", 00:15:09.932 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:09.932 "trsvcid": "0" 00:15:09.932 } 00:15:09.932 ], 00:15:09.932 "allow_any_host": true, 00:15:09.932 "hosts": [], 00:15:09.932 "serial_number": "SPDK2", 00:15:09.932 "model_number": "SPDK bdev Controller", 00:15:09.932 "max_namespaces": 32, 00:15:09.932 "min_cntlid": 1, 00:15:09.932 "max_cntlid": 65519, 00:15:09.932 "namespaces": [ 
00:15:09.932 { 00:15:09.932 "nsid": 1, 00:15:09.932 "bdev_name": "Malloc2", 00:15:09.932 "name": "Malloc2", 00:15:09.932 "nguid": "A5C770568EF947228B5C2FEB25E93A4F", 00:15:09.932 "uuid": "a5c77056-8ef9-4722-8b5c-2feb25e93a4f" 00:15:09.932 } 00:15:09.932 ] 00:15:09.932 } 00:15:09.932 ] 00:15:09.932 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2976119 00:15:09.932 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:09.932 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:09.932 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:09.932 15:32:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:09.933 [2024-12-06 15:32:15.899689] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:15:09.933 [2024-12-06 15:32:15.899717] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2976194 ] 00:15:10.194 [2024-12-06 15:32:15.936758] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:10.194 [2024-12-06 15:32:15.942014] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:10.194 [2024-12-06 15:32:15.942037] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8f4bd55000 00:15:10.194 [2024-12-06 15:32:15.943019] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:10.194 [2024-12-06 15:32:15.944024] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:10.194 [2024-12-06 15:32:15.945039] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:10.194 [2024-12-06 15:32:15.946043] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:10.194 [2024-12-06 15:32:15.947057] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:10.194 [2024-12-06 15:32:15.948062] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:10.194 [2024-12-06 15:32:15.949068] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:10.194 
[2024-12-06 15:32:15.950072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:10.194 [2024-12-06 15:32:15.951083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:10.194 [2024-12-06 15:32:15.951093] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8f4bd4a000 00:15:10.194 [2024-12-06 15:32:15.952006] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:10.194 [2024-12-06 15:32:15.961369] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:10.194 [2024-12-06 15:32:15.961392] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:10.194 [2024-12-06 15:32:15.966481] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:10.194 [2024-12-06 15:32:15.966519] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:10.194 [2024-12-06 15:32:15.966590] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:10.194 [2024-12-06 15:32:15.966602] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:10.194 [2024-12-06 15:32:15.966607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:10.194 [2024-12-06 15:32:15.967485] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:10.194 [2024-12-06 15:32:15.967495] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:10.194 [2024-12-06 15:32:15.967501] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:10.194 [2024-12-06 15:32:15.968490] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:10.194 [2024-12-06 15:32:15.968499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:10.194 [2024-12-06 15:32:15.968508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:10.194 [2024-12-06 15:32:15.969502] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:10.194 [2024-12-06 15:32:15.969511] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:10.194 [2024-12-06 15:32:15.970516] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:10.194 [2024-12-06 15:32:15.970524] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:10.194 [2024-12-06 15:32:15.970529] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:10.194 [2024-12-06 15:32:15.970535] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:10.194 [2024-12-06 15:32:15.970643] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:10.194 [2024-12-06 15:32:15.970647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:10.194 [2024-12-06 15:32:15.970652] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:10.194 [2024-12-06 15:32:15.971530] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:10.194 [2024-12-06 15:32:15.972538] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:10.194 [2024-12-06 15:32:15.973546] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:10.194 [2024-12-06 15:32:15.974552] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:10.194 [2024-12-06 15:32:15.974590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:10.195 [2024-12-06 15:32:15.975566] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:10.195 [2024-12-06 15:32:15.975575] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:10.195 [2024-12-06 15:32:15.975580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:15.975596] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:10.195 [2024-12-06 15:32:15.975603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:15.975617] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:10.195 [2024-12-06 15:32:15.975622] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:10.195 [2024-12-06 15:32:15.975625] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:10.195 [2024-12-06 15:32:15.975638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:10.195 [2024-12-06 15:32:15.984375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:10.195 [2024-12-06 15:32:15.984386] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:10.195 [2024-12-06 15:32:15.984394] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:10.195 [2024-12-06 15:32:15.984398] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:10.195 [2024-12-06 15:32:15.984402] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:10.195 [2024-12-06 15:32:15.984407] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:10.195 [2024-12-06 15:32:15.984411] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:10.195 [2024-12-06 15:32:15.984416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:15.984422] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:15.984432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:10.195 [2024-12-06 15:32:15.992372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:10.195 [2024-12-06 15:32:15.992384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.195 [2024-12-06 15:32:15.992391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.195 [2024-12-06 15:32:15.992399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.195 [2024-12-06 15:32:15.992406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.195 [2024-12-06 15:32:15.992410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:15.992419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:15.992427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:10.195 [2024-12-06 15:32:16.000373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:10.195 [2024-12-06 15:32:16.000381] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:10.195 [2024-12-06 15:32:16.000386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.000392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.000397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.000405] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:10.195 [2024-12-06 15:32:16.008382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:10.195 [2024-12-06 15:32:16.008439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.008447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:10.195 
[2024-12-06 15:32:16.008454] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:10.195 [2024-12-06 15:32:16.008458] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:10.195 [2024-12-06 15:32:16.008461] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:10.195 [2024-12-06 15:32:16.008467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:10.195 [2024-12-06 15:32:16.016372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:10.195 [2024-12-06 15:32:16.016382] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:10.195 [2024-12-06 15:32:16.016394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.016401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.016408] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:10.195 [2024-12-06 15:32:16.016412] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:10.195 [2024-12-06 15:32:16.016415] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:10.195 [2024-12-06 15:32:16.016420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:10.195 [2024-12-06 15:32:16.024371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:10.195 [2024-12-06 15:32:16.024384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.024391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.024398] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:10.195 [2024-12-06 15:32:16.024402] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:10.195 [2024-12-06 15:32:16.024405] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:10.195 [2024-12-06 15:32:16.024411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:10.195 [2024-12-06 15:32:16.032374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:10.195 [2024-12-06 15:32:16.032383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.032389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.032397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.032404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.032411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.032416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.032421] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:10.195 [2024-12-06 15:32:16.032425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:10.195 [2024-12-06 15:32:16.032429] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:10.195 [2024-12-06 15:32:16.032446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:10.195 [2024-12-06 15:32:16.039783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:10.195 [2024-12-06 15:32:16.039799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:10.195 [2024-12-06 15:32:16.047372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:10.195 [2024-12-06 15:32:16.047388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:10.195 [2024-12-06 15:32:16.055373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:10.195 [2024-12-06 
15:32:16.055386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:10.195 [2024-12-06 15:32:16.063373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:10.195 [2024-12-06 15:32:16.063389] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:10.195 [2024-12-06 15:32:16.063393] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:10.195 [2024-12-06 15:32:16.063397] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:10.195 [2024-12-06 15:32:16.063400] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:10.195 [2024-12-06 15:32:16.063403] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:10.195 [2024-12-06 15:32:16.063409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:10.196 [2024-12-06 15:32:16.063416] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:10.196 [2024-12-06 15:32:16.063420] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:10.196 [2024-12-06 15:32:16.063423] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:10.196 [2024-12-06 15:32:16.063429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:10.196 [2024-12-06 15:32:16.063435] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:10.196 [2024-12-06 15:32:16.063439] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:10.196 [2024-12-06 15:32:16.063442] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:10.196 [2024-12-06 15:32:16.063447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:10.196 [2024-12-06 15:32:16.063457] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:10.196 [2024-12-06 15:32:16.063461] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:10.196 [2024-12-06 15:32:16.063464] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:10.196 [2024-12-06 15:32:16.063469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:10.196 [2024-12-06 15:32:16.071373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:10.196 [2024-12-06 15:32:16.071387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:10.196 [2024-12-06 15:32:16.071397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:10.196 [2024-12-06 15:32:16.071403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:10.196 ===================================================== 00:15:10.196 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:10.196 ===================================================== 00:15:10.196 Controller Capabilities/Features 00:15:10.196 
================================ 00:15:10.196 Vendor ID: 4e58 00:15:10.196 Subsystem Vendor ID: 4e58 00:15:10.196 Serial Number: SPDK2 00:15:10.196 Model Number: SPDK bdev Controller 00:15:10.196 Firmware Version: 25.01 00:15:10.196 Recommended Arb Burst: 6 00:15:10.196 IEEE OUI Identifier: 8d 6b 50 00:15:10.196 Multi-path I/O 00:15:10.196 May have multiple subsystem ports: Yes 00:15:10.196 May have multiple controllers: Yes 00:15:10.196 Associated with SR-IOV VF: No 00:15:10.196 Max Data Transfer Size: 131072 00:15:10.196 Max Number of Namespaces: 32 00:15:10.196 Max Number of I/O Queues: 127 00:15:10.196 NVMe Specification Version (VS): 1.3 00:15:10.196 NVMe Specification Version (Identify): 1.3 00:15:10.196 Maximum Queue Entries: 256 00:15:10.196 Contiguous Queues Required: Yes 00:15:10.196 Arbitration Mechanisms Supported 00:15:10.196 Weighted Round Robin: Not Supported 00:15:10.196 Vendor Specific: Not Supported 00:15:10.196 Reset Timeout: 15000 ms 00:15:10.196 Doorbell Stride: 4 bytes 00:15:10.196 NVM Subsystem Reset: Not Supported 00:15:10.196 Command Sets Supported 00:15:10.196 NVM Command Set: Supported 00:15:10.196 Boot Partition: Not Supported 00:15:10.196 Memory Page Size Minimum: 4096 bytes 00:15:10.196 Memory Page Size Maximum: 4096 bytes 00:15:10.196 Persistent Memory Region: Not Supported 00:15:10.196 Optional Asynchronous Events Supported 00:15:10.196 Namespace Attribute Notices: Supported 00:15:10.196 Firmware Activation Notices: Not Supported 00:15:10.196 ANA Change Notices: Not Supported 00:15:10.196 PLE Aggregate Log Change Notices: Not Supported 00:15:10.196 LBA Status Info Alert Notices: Not Supported 00:15:10.196 EGE Aggregate Log Change Notices: Not Supported 00:15:10.196 Normal NVM Subsystem Shutdown event: Not Supported 00:15:10.196 Zone Descriptor Change Notices: Not Supported 00:15:10.196 Discovery Log Change Notices: Not Supported 00:15:10.196 Controller Attributes 00:15:10.196 128-bit Host Identifier: Supported 00:15:10.196 
Non-Operational Permissive Mode: Not Supported 00:15:10.196 NVM Sets: Not Supported 00:15:10.196 Read Recovery Levels: Not Supported 00:15:10.196 Endurance Groups: Not Supported 00:15:10.196 Predictable Latency Mode: Not Supported 00:15:10.196 Traffic Based Keep ALive: Not Supported 00:15:10.196 Namespace Granularity: Not Supported 00:15:10.196 SQ Associations: Not Supported 00:15:10.196 UUID List: Not Supported 00:15:10.196 Multi-Domain Subsystem: Not Supported 00:15:10.196 Fixed Capacity Management: Not Supported 00:15:10.196 Variable Capacity Management: Not Supported 00:15:10.196 Delete Endurance Group: Not Supported 00:15:10.196 Delete NVM Set: Not Supported 00:15:10.196 Extended LBA Formats Supported: Not Supported 00:15:10.196 Flexible Data Placement Supported: Not Supported 00:15:10.196 00:15:10.196 Controller Memory Buffer Support 00:15:10.196 ================================ 00:15:10.196 Supported: No 00:15:10.196 00:15:10.196 Persistent Memory Region Support 00:15:10.196 ================================ 00:15:10.196 Supported: No 00:15:10.196 00:15:10.196 Admin Command Set Attributes 00:15:10.196 ============================ 00:15:10.196 Security Send/Receive: Not Supported 00:15:10.196 Format NVM: Not Supported 00:15:10.196 Firmware Activate/Download: Not Supported 00:15:10.196 Namespace Management: Not Supported 00:15:10.196 Device Self-Test: Not Supported 00:15:10.196 Directives: Not Supported 00:15:10.196 NVMe-MI: Not Supported 00:15:10.196 Virtualization Management: Not Supported 00:15:10.196 Doorbell Buffer Config: Not Supported 00:15:10.196 Get LBA Status Capability: Not Supported 00:15:10.196 Command & Feature Lockdown Capability: Not Supported 00:15:10.196 Abort Command Limit: 4 00:15:10.196 Async Event Request Limit: 4 00:15:10.196 Number of Firmware Slots: N/A 00:15:10.196 Firmware Slot 1 Read-Only: N/A 00:15:10.196 Firmware Activation Without Reset: N/A 00:15:10.196 Multiple Update Detection Support: N/A 00:15:10.196 Firmware Update 
Granularity: No Information Provided 00:15:10.196 Per-Namespace SMART Log: No 00:15:10.196 Asymmetric Namespace Access Log Page: Not Supported 00:15:10.196 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:10.196 Command Effects Log Page: Supported 00:15:10.196 Get Log Page Extended Data: Supported 00:15:10.196 Telemetry Log Pages: Not Supported 00:15:10.196 Persistent Event Log Pages: Not Supported 00:15:10.196 Supported Log Pages Log Page: May Support 00:15:10.196 Commands Supported & Effects Log Page: Not Supported 00:15:10.196 Feature Identifiers & Effects Log Page:May Support 00:15:10.196 NVMe-MI Commands & Effects Log Page: May Support 00:15:10.196 Data Area 4 for Telemetry Log: Not Supported 00:15:10.196 Error Log Page Entries Supported: 128 00:15:10.196 Keep Alive: Supported 00:15:10.196 Keep Alive Granularity: 10000 ms 00:15:10.196 00:15:10.196 NVM Command Set Attributes 00:15:10.196 ========================== 00:15:10.196 Submission Queue Entry Size 00:15:10.196 Max: 64 00:15:10.196 Min: 64 00:15:10.196 Completion Queue Entry Size 00:15:10.196 Max: 16 00:15:10.196 Min: 16 00:15:10.196 Number of Namespaces: 32 00:15:10.196 Compare Command: Supported 00:15:10.196 Write Uncorrectable Command: Not Supported 00:15:10.196 Dataset Management Command: Supported 00:15:10.196 Write Zeroes Command: Supported 00:15:10.196 Set Features Save Field: Not Supported 00:15:10.196 Reservations: Not Supported 00:15:10.196 Timestamp: Not Supported 00:15:10.196 Copy: Supported 00:15:10.196 Volatile Write Cache: Present 00:15:10.196 Atomic Write Unit (Normal): 1 00:15:10.196 Atomic Write Unit (PFail): 1 00:15:10.196 Atomic Compare & Write Unit: 1 00:15:10.196 Fused Compare & Write: Supported 00:15:10.196 Scatter-Gather List 00:15:10.196 SGL Command Set: Supported (Dword aligned) 00:15:10.196 SGL Keyed: Not Supported 00:15:10.196 SGL Bit Bucket Descriptor: Not Supported 00:15:10.196 SGL Metadata Pointer: Not Supported 00:15:10.196 Oversized SGL: Not Supported 00:15:10.196 SGL 
Metadata Address: Not Supported 00:15:10.196 SGL Offset: Not Supported 00:15:10.196 Transport SGL Data Block: Not Supported 00:15:10.196 Replay Protected Memory Block: Not Supported 00:15:10.196 00:15:10.196 Firmware Slot Information 00:15:10.196 ========================= 00:15:10.196 Active slot: 1 00:15:10.196 Slot 1 Firmware Revision: 25.01 00:15:10.196 00:15:10.196 00:15:10.196 Commands Supported and Effects 00:15:10.196 ============================== 00:15:10.196 Admin Commands 00:15:10.196 -------------- 00:15:10.196 Get Log Page (02h): Supported 00:15:10.196 Identify (06h): Supported 00:15:10.196 Abort (08h): Supported 00:15:10.196 Set Features (09h): Supported 00:15:10.196 Get Features (0Ah): Supported 00:15:10.196 Asynchronous Event Request (0Ch): Supported 00:15:10.196 Keep Alive (18h): Supported 00:15:10.196 I/O Commands 00:15:10.196 ------------ 00:15:10.197 Flush (00h): Supported LBA-Change 00:15:10.197 Write (01h): Supported LBA-Change 00:15:10.197 Read (02h): Supported 00:15:10.197 Compare (05h): Supported 00:15:10.197 Write Zeroes (08h): Supported LBA-Change 00:15:10.197 Dataset Management (09h): Supported LBA-Change 00:15:10.197 Copy (19h): Supported LBA-Change 00:15:10.197 00:15:10.197 Error Log 00:15:10.197 ========= 00:15:10.197 00:15:10.197 Arbitration 00:15:10.197 =========== 00:15:10.197 Arbitration Burst: 1 00:15:10.197 00:15:10.197 Power Management 00:15:10.197 ================ 00:15:10.197 Number of Power States: 1 00:15:10.197 Current Power State: Power State #0 00:15:10.197 Power State #0: 00:15:10.197 Max Power: 0.00 W 00:15:10.197 Non-Operational State: Operational 00:15:10.197 Entry Latency: Not Reported 00:15:10.197 Exit Latency: Not Reported 00:15:10.197 Relative Read Throughput: 0 00:15:10.197 Relative Read Latency: 0 00:15:10.197 Relative Write Throughput: 0 00:15:10.197 Relative Write Latency: 0 00:15:10.197 Idle Power: Not Reported 00:15:10.197 Active Power: Not Reported 00:15:10.197 Non-Operational Permissive Mode: Not 
Supported 00:15:10.197 00:15:10.197 Health Information 00:15:10.197 ================== 00:15:10.197 Critical Warnings: 00:15:10.197 Available Spare Space: OK 00:15:10.197 Temperature: OK 00:15:10.197 Device Reliability: OK 00:15:10.197 Read Only: No 00:15:10.197 Volatile Memory Backup: OK 00:15:10.197 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:10.197 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:10.197 Available Spare: 0% 00:15:10.197 Available Sp[2024-12-06 15:32:16.071492] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:10.197 [2024-12-06 15:32:16.079372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:10.197 [2024-12-06 15:32:16.079402] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:10.197 [2024-12-06 15:32:16.079411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.197 [2024-12-06 15:32:16.079416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.197 [2024-12-06 15:32:16.079422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.197 [2024-12-06 15:32:16.079428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.197 [2024-12-06 15:32:16.079470] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:10.197 [2024-12-06 15:32:16.079480] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:10.197 
[2024-12-06 15:32:16.080468] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:10.197 [2024-12-06 15:32:16.080514] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:10.197 [2024-12-06 15:32:16.080520] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:10.197 [2024-12-06 15:32:16.081473] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:10.197 [2024-12-06 15:32:16.081484] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:10.197 [2024-12-06 15:32:16.081530] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:10.197 [2024-12-06 15:32:16.082496] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:10.197 are Threshold: 0% 00:15:10.197 Life Percentage Used: 0% 00:15:10.197 Data Units Read: 0 00:15:10.197 Data Units Written: 0 00:15:10.197 Host Read Commands: 0 00:15:10.197 Host Write Commands: 0 00:15:10.197 Controller Busy Time: 0 minutes 00:15:10.197 Power Cycles: 0 00:15:10.197 Power On Hours: 0 hours 00:15:10.197 Unsafe Shutdowns: 0 00:15:10.197 Unrecoverable Media Errors: 0 00:15:10.197 Lifetime Error Log Entries: 0 00:15:10.197 Warning Temperature Time: 0 minutes 00:15:10.197 Critical Temperature Time: 0 minutes 00:15:10.197 00:15:10.197 Number of Queues 00:15:10.197 ================ 00:15:10.197 Number of I/O Submission Queues: 127 00:15:10.197 Number of I/O Completion Queues: 127 00:15:10.197 00:15:10.197 Active Namespaces 00:15:10.197 ================= 00:15:10.197 Namespace ID:1 00:15:10.197 Error Recovery Timeout: Unlimited 
00:15:10.197 Command Set Identifier: NVM (00h) 00:15:10.197 Deallocate: Supported 00:15:10.197 Deallocated/Unwritten Error: Not Supported 00:15:10.197 Deallocated Read Value: Unknown 00:15:10.197 Deallocate in Write Zeroes: Not Supported 00:15:10.197 Deallocated Guard Field: 0xFFFF 00:15:10.197 Flush: Supported 00:15:10.197 Reservation: Supported 00:15:10.197 Namespace Sharing Capabilities: Multiple Controllers 00:15:10.197 Size (in LBAs): 131072 (0GiB) 00:15:10.197 Capacity (in LBAs): 131072 (0GiB) 00:15:10.197 Utilization (in LBAs): 131072 (0GiB) 00:15:10.197 NGUID: A5C770568EF947228B5C2FEB25E93A4F 00:15:10.197 UUID: a5c77056-8ef9-4722-8b5c-2feb25e93a4f 00:15:10.197 Thin Provisioning: Not Supported 00:15:10.197 Per-NS Atomic Units: Yes 00:15:10.197 Atomic Boundary Size (Normal): 0 00:15:10.197 Atomic Boundary Size (PFail): 0 00:15:10.197 Atomic Boundary Offset: 0 00:15:10.197 Maximum Single Source Range Length: 65535 00:15:10.197 Maximum Copy Length: 65535 00:15:10.197 Maximum Source Range Count: 1 00:15:10.197 NGUID/EUI64 Never Reused: No 00:15:10.197 Namespace Write Protected: No 00:15:10.197 Number of LBA Formats: 1 00:15:10.197 Current LBA Format: LBA Format #00 00:15:10.197 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:10.197 00:15:10.197 15:32:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:10.455 [2024-12-06 15:32:16.311742] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:15.739 Initializing NVMe Controllers 00:15:15.739 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:15.739 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:15:15.739 Initialization complete. Launching workers. 00:15:15.739 ======================================================== 00:15:15.739 Latency(us) 00:15:15.739 Device Information : IOPS MiB/s Average min max 00:15:15.739 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39944.36 156.03 3204.29 971.77 6665.68 00:15:15.739 ======================================================== 00:15:15.739 Total : 39944.36 156.03 3204.29 971.77 6665.68 00:15:15.739 00:15:15.739 [2024-12-06 15:32:21.417636] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:15.739 15:32:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:15.739 [2024-12-06 15:32:21.650335] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.006 Initializing NVMe Controllers 00:15:21.006 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:21.006 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:21.006 Initialization complete. Launching workers. 
00:15:21.006 ======================================================== 00:15:21.006 Latency(us) 00:15:21.006 Device Information : IOPS MiB/s Average min max 00:15:21.006 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39951.06 156.06 3203.75 972.91 6626.37 00:15:21.006 ======================================================== 00:15:21.006 Total : 39951.06 156.06 3203.75 972.91 6626.37 00:15:21.006 00:15:21.006 [2024-12-06 15:32:26.672972] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.006 15:32:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:21.006 [2024-12-06 15:32:26.875741] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:26.266 [2024-12-06 15:32:32.024475] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:26.266 Initializing NVMe Controllers 00:15:26.266 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:26.266 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:26.266 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:26.266 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:26.266 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:26.266 Initialization complete. Launching workers. 
00:15:26.266 Starting thread on core 2 00:15:26.266 Starting thread on core 3 00:15:26.266 Starting thread on core 1 00:15:26.266 15:32:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:26.524 [2024-12-06 15:32:32.314358] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.810 [2024-12-06 15:32:35.378665] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.810 Initializing NVMe Controllers 00:15:29.810 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:29.810 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:29.810 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:29.810 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:29.810 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:29.810 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:29.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:29.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:29.810 Initialization complete. Launching workers. 
00:15:29.810 Starting thread on core 1 with urgent priority queue 00:15:29.810 Starting thread on core 2 with urgent priority queue 00:15:29.810 Starting thread on core 3 with urgent priority queue 00:15:29.810 Starting thread on core 0 with urgent priority queue 00:15:29.810 SPDK bdev Controller (SPDK2 ) core 0: 9523.00 IO/s 10.50 secs/100000 ios 00:15:29.810 SPDK bdev Controller (SPDK2 ) core 1: 8773.33 IO/s 11.40 secs/100000 ios 00:15:29.810 SPDK bdev Controller (SPDK2 ) core 2: 10248.67 IO/s 9.76 secs/100000 ios 00:15:29.810 SPDK bdev Controller (SPDK2 ) core 3: 9605.33 IO/s 10.41 secs/100000 ios 00:15:29.810 ======================================================== 00:15:29.810 00:15:29.810 15:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:29.810 [2024-12-06 15:32:35.667766] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.810 Initializing NVMe Controllers 00:15:29.810 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:29.810 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:29.810 Namespace ID: 1 size: 0GB 00:15:29.810 Initialization complete. 00:15:29.810 INFO: using host memory buffer for IO 00:15:29.810 Hello world! 
00:15:29.810 [2024-12-06 15:32:35.677832] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.810 15:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:30.068 [2024-12-06 15:32:35.959188] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.442 Initializing NVMe Controllers 00:15:31.442 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.442 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.442 Initialization complete. Launching workers. 00:15:31.442 submit (in ns) avg, min, max = 5530.7, 3183.8, 3997999.0 00:15:31.442 complete (in ns) avg, min, max = 20528.2, 1758.1, 3998404.8 00:15:31.442 00:15:31.442 Submit histogram 00:15:31.442 ================ 00:15:31.442 Range in us Cumulative Count 00:15:31.442 3.170 - 3.185: 0.0060% ( 1) 00:15:31.442 3.185 - 3.200: 0.2283% ( 37) 00:15:31.442 3.200 - 3.215: 1.9646% ( 289) 00:15:31.442 3.215 - 3.230: 6.5545% ( 764) 00:15:31.442 3.230 - 3.246: 11.3067% ( 791) 00:15:31.442 3.246 - 3.261: 17.0201% ( 951) 00:15:31.442 3.261 - 3.276: 23.6227% ( 1099) 00:15:31.442 3.276 - 3.291: 29.7086% ( 1013) 00:15:31.442 3.291 - 3.307: 35.9267% ( 1035) 00:15:31.442 3.307 - 3.322: 42.2109% ( 1046) 00:15:31.442 3.322 - 3.337: 47.7381% ( 920) 00:15:31.442 3.337 - 3.352: 52.7125% ( 828) 00:15:31.442 3.352 - 3.368: 58.4260% ( 951) 00:15:31.442 3.368 - 3.383: 66.0439% ( 1268) 00:15:31.442 3.383 - 3.398: 71.2947% ( 874) 00:15:31.442 3.398 - 3.413: 76.4674% ( 861) 00:15:31.442 3.413 - 3.429: 80.8771% ( 734) 00:15:31.442 3.429 - 3.444: 83.6347% ( 459) 00:15:31.442 3.444 - 3.459: 85.5512% ( 319) 00:15:31.442 3.459 - 3.474: 86.7768% ( 204) 00:15:31.442 3.474 - 3.490: 87.3355% ( 
93) 00:15:31.442 3.490 - 3.505: 87.8041% ( 78) 00:15:31.442 3.505 - 3.520: 88.4410% ( 106) 00:15:31.442 3.520 - 3.535: 89.0658% ( 104) 00:15:31.442 3.535 - 3.550: 89.9910% ( 154) 00:15:31.442 3.550 - 3.566: 90.9042% ( 152) 00:15:31.442 3.566 - 3.581: 92.0096% ( 184) 00:15:31.442 3.581 - 3.596: 93.0790% ( 178) 00:15:31.442 3.596 - 3.611: 94.0102% ( 155) 00:15:31.442 3.611 - 3.627: 94.9895% ( 163) 00:15:31.442 3.627 - 3.642: 95.8847% ( 149) 00:15:31.442 3.642 - 3.657: 96.6657% ( 130) 00:15:31.442 3.657 - 3.672: 97.3686% ( 117) 00:15:31.442 3.672 - 3.688: 98.0174% ( 108) 00:15:31.442 3.688 - 3.703: 98.4260% ( 68) 00:15:31.442 3.703 - 3.718: 98.8105% ( 64) 00:15:31.442 3.718 - 3.733: 99.0868% ( 46) 00:15:31.442 3.733 - 3.749: 99.2430% ( 26) 00:15:31.442 3.749 - 3.764: 99.3992% ( 26) 00:15:31.442 3.764 - 3.779: 99.4773% ( 13) 00:15:31.442 3.779 - 3.794: 99.5554% ( 13) 00:15:31.442 3.794 - 3.810: 99.5734% ( 3) 00:15:31.442 3.810 - 3.825: 99.5975% ( 4) 00:15:31.442 3.825 - 3.840: 99.6155% ( 3) 00:15:31.442 3.840 - 3.855: 99.6215% ( 1) 00:15:31.442 3.855 - 3.870: 99.6275% ( 1) 00:15:31.442 3.870 - 3.886: 99.6395% ( 2) 00:15:31.442 3.886 - 3.901: 99.6455% ( 1) 00:15:31.442 3.931 - 3.962: 99.6576% ( 2) 00:15:31.442 3.962 - 3.992: 99.6636% ( 1) 00:15:31.442 3.992 - 4.023: 99.6696% ( 1) 00:15:31.442 4.023 - 4.053: 99.6816% ( 2) 00:15:31.442 4.053 - 4.084: 99.6876% ( 1) 00:15:31.442 4.084 - 4.114: 99.6996% ( 2) 00:15:31.442 4.175 - 4.206: 99.7056% ( 1) 00:15:31.442 4.328 - 4.358: 99.7116% ( 1) 00:15:31.442 4.846 - 4.876: 99.7176% ( 1) 00:15:31.442 4.998 - 5.029: 99.7236% ( 1) 00:15:31.442 5.181 - 5.211: 99.7296% ( 1) 00:15:31.442 5.455 - 5.486: 99.7357% ( 1) 00:15:31.442 5.669 - 5.699: 99.7417% ( 1) 00:15:31.442 5.790 - 5.821: 99.7477% ( 1) 00:15:31.442 5.821 - 5.851: 99.7597% ( 2) 00:15:31.442 5.882 - 5.912: 99.7657% ( 1) 00:15:31.442 5.912 - 5.943: 99.7717% ( 1) 00:15:31.442 6.004 - 6.034: 99.7777% ( 1) 00:15:31.442 6.034 - 6.065: 99.7837% ( 1) 00:15:31.442 6.095 - 6.126: 
99.7957% ( 2) 00:15:31.442 6.126 - 6.156: 99.8017% ( 1) 00:15:31.442 6.156 - 6.187: 99.8138% ( 2) 00:15:31.442 6.278 - 6.309: 99.8198% ( 1) 00:15:31.442 6.309 - 6.339: 99.8258% ( 1) 00:15:31.442 6.339 - 6.370: 99.8378% ( 2) 00:15:31.442 6.370 - 6.400: 99.8438% ( 1) 00:15:31.442 6.461 - 6.491: 99.8498% ( 1) 00:15:31.442 6.491 - 6.522: 99.8558% ( 1) 00:15:31.442 6.552 - 6.583: 99.8618% ( 1) 00:15:31.442 6.644 - 6.674: 99.8678% ( 1) 00:15:31.442 6.674 - 6.705: 99.8738% ( 1) 00:15:31.442 [2024-12-06 15:32:37.054564] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.442 6.735 - 6.766: 99.8798% ( 1) 00:15:31.442 6.766 - 6.796: 99.8859% ( 1) 00:15:31.442 6.949 - 6.979: 99.8919% ( 1) 00:15:31.442 7.101 - 7.131: 99.8979% ( 1) 00:15:31.442 7.131 - 7.162: 99.9039% ( 1) 00:15:31.442 7.314 - 7.345: 99.9159% ( 2) 00:15:31.442 7.375 - 7.406: 99.9219% ( 1) 00:15:31.442 7.528 - 7.558: 99.9279% ( 1) 00:15:31.442 7.863 - 7.924: 99.9339% ( 1) 00:15:31.442 9.813 - 9.874: 99.9399% ( 1) 00:15:31.442 16.213 - 16.335: 99.9459% ( 1) 00:15:31.442 3994.575 - 4025.783: 100.0000% ( 9) 00:15:31.442 00:15:31.442 Complete histogram 00:15:31.443 ================== 00:15:31.443 Range in us Cumulative Count 00:15:31.443 1.752 - 1.760: 0.0120% ( 2) 00:15:31.443 1.760 - 1.768: 0.6849% ( 112) 00:15:31.443 1.768 - 1.775: 9.7747% ( 1513) 00:15:31.443 1.775 - 1.783: 40.8711% ( 5176) 00:15:31.443 1.783 - 1.790: 69.8408% ( 4822) 00:15:31.443 1.790 - 1.798: 80.3845% ( 1755) 00:15:31.443 1.798 - 1.806: 84.1874% ( 633) 00:15:31.443 1.806 - 1.813: 86.8429% ( 442) 00:15:31.443 1.813 - 1.821: 89.6966% ( 475) 00:15:31.443 1.821 - 1.829: 92.8387% ( 523) 00:15:31.443 1.829 - 1.836: 94.8994% ( 343) 00:15:31.443 1.836 - 1.844: 95.8726% ( 162) 00:15:31.443 1.844 - 1.851: 96.7318% ( 143) 00:15:31.443 1.851 - 1.859: 97.4947% ( 127) 00:15:31.443 1.859 - 1.867: 98.1135% ( 103) 00:15:31.443 1.867 - 1.874: 98.5641% ( 75) 00:15:31.443 1.874 - 1.882: 98.8525% ( 48) 
00:15:31.443 1.882 - 1.890: 98.9486% ( 16) 00:15:31.443 1.890 - 1.897: 99.0087% ( 10) 00:15:31.443 1.897 - 1.905: 99.0448% ( 6) 00:15:31.443 1.905 - 1.912: 99.0808% ( 6) 00:15:31.443 1.912 - 1.920: 99.1108% ( 5) 00:15:31.443 1.920 - 1.928: 99.1529% ( 7) 00:15:31.443 1.928 - 1.935: 99.1649% ( 2) 00:15:31.443 1.935 - 1.943: 99.1769% ( 2) 00:15:31.443 1.943 - 1.950: 99.1889% ( 2) 00:15:31.443 1.950 - 1.966: 99.2010% ( 2) 00:15:31.443 1.966 - 1.981: 99.2130% ( 2) 00:15:31.443 1.981 - 1.996: 99.2370% ( 4) 00:15:31.443 1.996 - 2.011: 99.2490% ( 2) 00:15:31.443 2.011 - 2.027: 99.2550% ( 1) 00:15:31.443 2.027 - 2.042: 99.2731% ( 3) 00:15:31.443 2.042 - 2.057: 99.2911% ( 3) 00:15:31.443 2.072 - 2.088: 99.3031% ( 2) 00:15:31.443 2.103 - 2.118: 99.3091% ( 1) 00:15:31.443 2.149 - 2.164: 99.3211% ( 2) 00:15:31.443 2.194 - 2.210: 99.3331% ( 2) 00:15:31.443 2.210 - 2.225: 99.3451% ( 2) 00:15:31.443 2.255 - 2.270: 99.3512% ( 1) 00:15:31.443 2.301 - 2.316: 99.3572% ( 1) 00:15:31.443 2.347 - 2.362: 99.3632% ( 1) 00:15:31.443 3.596 - 3.611: 99.3692% ( 1) 00:15:31.443 3.931 - 3.962: 99.3752% ( 1) 00:15:31.443 4.084 - 4.114: 99.3812% ( 1) 00:15:31.443 4.114 - 4.145: 99.3932% ( 2) 00:15:31.443 4.175 - 4.206: 99.3992% ( 1) 00:15:31.443 4.267 - 4.297: 99.4052% ( 1) 00:15:31.443 4.297 - 4.328: 99.4172% ( 2) 00:15:31.443 4.450 - 4.480: 99.4233% ( 1) 00:15:31.443 4.602 - 4.632: 99.4353% ( 2) 00:15:31.443 4.754 - 4.785: 99.4473% ( 2) 00:15:31.443 4.937 - 4.968: 99.4533% ( 1) 00:15:31.443 5.029 - 5.059: 99.4593% ( 1) 00:15:31.443 5.090 - 5.120: 99.4653% ( 1) 00:15:31.443 5.242 - 5.272: 99.4773% ( 2) 00:15:31.443 5.364 - 5.394: 99.4833% ( 1) 00:15:31.443 5.699 - 5.730: 99.4893% ( 1) 00:15:31.443 5.790 - 5.821: 99.4953% ( 1) 00:15:31.443 6.156 - 6.187: 99.5014% ( 1) 00:15:31.443 6.552 - 6.583: 99.5074% ( 1) 00:15:31.443 9.143 - 9.204: 99.5134% ( 1) 00:15:31.443 27.916 - 28.038: 99.5194% ( 1) 00:15:31.443 39.010 - 39.253: 99.5254% ( 1) 00:15:31.443 189.196 - 190.171: 99.5314% ( 1) 00:15:31.443 
3994.575 - 4025.783: 100.0000% ( 78) 00:15:31.443 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:31.443 [ 00:15:31.443 { 00:15:31.443 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:31.443 "subtype": "Discovery", 00:15:31.443 "listen_addresses": [], 00:15:31.443 "allow_any_host": true, 00:15:31.443 "hosts": [] 00:15:31.443 }, 00:15:31.443 { 00:15:31.443 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:31.443 "subtype": "NVMe", 00:15:31.443 "listen_addresses": [ 00:15:31.443 { 00:15:31.443 "trtype": "VFIOUSER", 00:15:31.443 "adrfam": "IPv4", 00:15:31.443 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:31.443 "trsvcid": "0" 00:15:31.443 } 00:15:31.443 ], 00:15:31.443 "allow_any_host": true, 00:15:31.443 "hosts": [], 00:15:31.443 "serial_number": "SPDK1", 00:15:31.443 "model_number": "SPDK bdev Controller", 00:15:31.443 "max_namespaces": 32, 00:15:31.443 "min_cntlid": 1, 00:15:31.443 "max_cntlid": 65519, 00:15:31.443 "namespaces": [ 00:15:31.443 { 00:15:31.443 "nsid": 1, 00:15:31.443 "bdev_name": "Malloc1", 00:15:31.443 "name": "Malloc1", 00:15:31.443 "nguid": "2396B1DCE78546D995E958A0A1A87AE4", 00:15:31.443 "uuid": "2396b1dc-e785-46d9-95e9-58a0a1a87ae4" 00:15:31.443 }, 00:15:31.443 { 00:15:31.443 "nsid": 2, 00:15:31.443 
"bdev_name": "Malloc3", 00:15:31.443 "name": "Malloc3", 00:15:31.443 "nguid": "C6E60A2B49604ED3963407AB35070763", 00:15:31.443 "uuid": "c6e60a2b-4960-4ed3-9634-07ab35070763" 00:15:31.443 } 00:15:31.443 ] 00:15:31.443 }, 00:15:31.443 { 00:15:31.443 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:31.443 "subtype": "NVMe", 00:15:31.443 "listen_addresses": [ 00:15:31.443 { 00:15:31.443 "trtype": "VFIOUSER", 00:15:31.443 "adrfam": "IPv4", 00:15:31.443 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:31.443 "trsvcid": "0" 00:15:31.443 } 00:15:31.443 ], 00:15:31.443 "allow_any_host": true, 00:15:31.443 "hosts": [], 00:15:31.443 "serial_number": "SPDK2", 00:15:31.443 "model_number": "SPDK bdev Controller", 00:15:31.443 "max_namespaces": 32, 00:15:31.443 "min_cntlid": 1, 00:15:31.443 "max_cntlid": 65519, 00:15:31.443 "namespaces": [ 00:15:31.443 { 00:15:31.443 "nsid": 1, 00:15:31.443 "bdev_name": "Malloc2", 00:15:31.443 "name": "Malloc2", 00:15:31.443 "nguid": "A5C770568EF947228B5C2FEB25E93A4F", 00:15:31.443 "uuid": "a5c77056-8ef9-4722-8b5c-2feb25e93a4f" 00:15:31.443 } 00:15:31.443 ] 00:15:31.443 } 00:15:31.443 ] 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2979651 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:31.443 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:31.703 [2024-12-06 15:32:37.452605] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.703 Malloc4 00:15:31.703 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:31.962 [2024-12-06 15:32:37.705542] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.962 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:31.962 Asynchronous Event Request test 00:15:31.962 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.962 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.962 Registering asynchronous event callbacks... 00:15:31.962 Starting namespace attribute notice tests for all controllers... 00:15:31.962 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:31.962 aer_cb - Changed Namespace 00:15:31.962 Cleaning up... 
00:15:31.962 [ 00:15:31.962 { 00:15:31.962 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:31.962 "subtype": "Discovery", 00:15:31.962 "listen_addresses": [], 00:15:31.962 "allow_any_host": true, 00:15:31.962 "hosts": [] 00:15:31.962 }, 00:15:31.962 { 00:15:31.962 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:31.962 "subtype": "NVMe", 00:15:31.962 "listen_addresses": [ 00:15:31.962 { 00:15:31.962 "trtype": "VFIOUSER", 00:15:31.962 "adrfam": "IPv4", 00:15:31.962 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:31.962 "trsvcid": "0" 00:15:31.962 } 00:15:31.962 ], 00:15:31.962 "allow_any_host": true, 00:15:31.962 "hosts": [], 00:15:31.962 "serial_number": "SPDK1", 00:15:31.962 "model_number": "SPDK bdev Controller", 00:15:31.962 "max_namespaces": 32, 00:15:31.962 "min_cntlid": 1, 00:15:31.962 "max_cntlid": 65519, 00:15:31.962 "namespaces": [ 00:15:31.962 { 00:15:31.962 "nsid": 1, 00:15:31.962 "bdev_name": "Malloc1", 00:15:31.962 "name": "Malloc1", 00:15:31.962 "nguid": "2396B1DCE78546D995E958A0A1A87AE4", 00:15:31.962 "uuid": "2396b1dc-e785-46d9-95e9-58a0a1a87ae4" 00:15:31.962 }, 00:15:31.962 { 00:15:31.962 "nsid": 2, 00:15:31.962 "bdev_name": "Malloc3", 00:15:31.962 "name": "Malloc3", 00:15:31.962 "nguid": "C6E60A2B49604ED3963407AB35070763", 00:15:31.962 "uuid": "c6e60a2b-4960-4ed3-9634-07ab35070763" 00:15:31.962 } 00:15:31.962 ] 00:15:31.962 }, 00:15:31.962 { 00:15:31.962 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:31.962 "subtype": "NVMe", 00:15:31.962 "listen_addresses": [ 00:15:31.962 { 00:15:31.962 "trtype": "VFIOUSER", 00:15:31.962 "adrfam": "IPv4", 00:15:31.962 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:31.962 "trsvcid": "0" 00:15:31.962 } 00:15:31.962 ], 00:15:31.962 "allow_any_host": true, 00:15:31.962 "hosts": [], 00:15:31.962 "serial_number": "SPDK2", 00:15:31.962 "model_number": "SPDK bdev Controller", 00:15:31.962 "max_namespaces": 32, 00:15:31.962 "min_cntlid": 1, 00:15:31.962 "max_cntlid": 65519, 00:15:31.962 "namespaces": [ 
00:15:31.962 { 00:15:31.962 "nsid": 1, 00:15:31.962 "bdev_name": "Malloc2", 00:15:31.963 "name": "Malloc2", 00:15:31.963 "nguid": "A5C770568EF947228B5C2FEB25E93A4F", 00:15:31.963 "uuid": "a5c77056-8ef9-4722-8b5c-2feb25e93a4f" 00:15:31.963 }, 00:15:31.963 { 00:15:31.963 "nsid": 2, 00:15:31.963 "bdev_name": "Malloc4", 00:15:31.963 "name": "Malloc4", 00:15:31.963 "nguid": "38820D3A3D70442299368287D089309B", 00:15:31.963 "uuid": "38820d3a-3d70-4422-9936-8287d089309b" 00:15:31.963 } 00:15:31.963 ] 00:15:31.963 } 00:15:31.963 ] 00:15:31.963 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2979651 00:15:31.963 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:31.963 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2972028 00:15:31.963 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2972028 ']' 00:15:31.963 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2972028 00:15:31.963 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:31.963 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.963 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2972028 00:15:32.222 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.222 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.222 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2972028' 00:15:32.222 killing process with pid 2972028 00:15:32.222 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 2972028 00:15:32.222 15:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2972028 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2979885 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2979885' 00:15:32.481 Process pid: 2979885 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2979885 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2979885 ']' 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.481 
15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.481 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:32.481 [2024-12-06 15:32:38.272501] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:32.481 [2024-12-06 15:32:38.273303] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:15:32.481 [2024-12-06 15:32:38.273340] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.481 [2024-12-06 15:32:38.346294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:32.481 [2024-12-06 15:32:38.385755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.481 [2024-12-06 15:32:38.385794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.481 [2024-12-06 15:32:38.385801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.481 [2024-12-06 15:32:38.385808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.481 [2024-12-06 15:32:38.385813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:32.481 [2024-12-06 15:32:38.387332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.481 [2024-12-06 15:32:38.387442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.481 [2024-12-06 15:32:38.387549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.481 [2024-12-06 15:32:38.387550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.481 [2024-12-06 15:32:38.455203] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:32.481 [2024-12-06 15:32:38.455896] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:32.481 [2024-12-06 15:32:38.456057] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:32.481 [2024-12-06 15:32:38.456222] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:32.481 [2024-12-06 15:32:38.456292] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:32.740 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.740 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:32.740 15:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:33.676 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:33.935 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:33.935 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:33.935 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:33.935 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:33.935 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:33.935 Malloc1 00:15:33.935 15:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:34.193 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:34.451 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:34.709 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:34.709 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:34.709 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:34.968 Malloc2 00:15:34.968 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:35.226 15:32:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:35.226 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:35.485 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:35.485 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2979885 00:15:35.485 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2979885 ']' 00:15:35.485 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2979885 00:15:35.485 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:35.485 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.485 15:32:41 
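For reference, the per-device vfio-user target setup that the trace above repeats for `vfio-user1`/`cnode1` and `vfio-user2`/`cnode2` reduces to a short JSON-RPC sequence. The sketch below is assembled only from commands visible in this log; the `$rpc_py` variable and the relative `scripts/rpc.py` path are assumptions (the log uses an absolute workspace path), and the commands only do anything against a running `nvmf_tgt` started with `-M -I` transport support as in this test.

```shell
#!/usr/bin/env bash
# Sketch of the vfio-user setup sequence from the trace above (assumed paths).
rpc_py=./scripts/rpc.py   # assumption: run from an SPDK checkout with nvmf_tgt up

# Create the VFIOUSER transport once (the -M -I flags match this interrupt-mode run).
$rpc_py nvmf_create_transport -t VFIOUSER -M -I

# Per device i: a socket directory, a malloc bdev, a subsystem, a namespace, a listener.
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$rpc_py bdev_malloc_create 64 512 -b Malloc1
$rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
    -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
```

After this, initiators connect with a transport ID of the form `trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1`, as the `arbitration`, `hello_world`, and `aer` invocations in the log do.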
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2979885 00:15:35.485 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:35.485 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:35.485 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2979885' 00:15:35.485 killing process with pid 2979885 00:15:35.485 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2979885 00:15:35.485 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2979885 00:15:35.744 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:35.745 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:35.745 00:15:35.745 real 0m50.866s 00:15:35.745 user 3m16.673s 00:15:35.745 sys 0m3.317s 00:15:35.745 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.745 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:35.745 ************************************ 00:15:35.745 END TEST nvmf_vfio_user 00:15:35.745 ************************************ 00:15:35.745 15:32:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:35.745 15:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:35.745 15:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.745 15:32:41 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.745 ************************************ 00:15:35.745 START TEST nvmf_vfio_user_nvme_compliance 00:15:35.745 ************************************ 00:15:35.745 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:36.004 * Looking for test storage... 00:15:36.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:36.004 15:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:36.004 15:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:36.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.004 --rc genhtml_branch_coverage=1 00:15:36.004 --rc genhtml_function_coverage=1 00:15:36.004 --rc genhtml_legend=1 00:15:36.004 --rc geninfo_all_blocks=1 00:15:36.004 --rc geninfo_unexecuted_blocks=1 00:15:36.004 00:15:36.004 ' 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:36.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.004 --rc genhtml_branch_coverage=1 00:15:36.004 --rc genhtml_function_coverage=1 00:15:36.004 --rc genhtml_legend=1 00:15:36.004 --rc geninfo_all_blocks=1 00:15:36.004 --rc geninfo_unexecuted_blocks=1 00:15:36.004 00:15:36.004 ' 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:36.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.004 --rc genhtml_branch_coverage=1 00:15:36.004 --rc genhtml_function_coverage=1 00:15:36.004 --rc 
genhtml_legend=1 00:15:36.004 --rc geninfo_all_blocks=1 00:15:36.004 --rc geninfo_unexecuted_blocks=1 00:15:36.004 00:15:36.004 ' 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:36.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.004 --rc genhtml_branch_coverage=1 00:15:36.004 --rc genhtml_function_coverage=1 00:15:36.004 --rc genhtml_legend=1 00:15:36.004 --rc geninfo_all_blocks=1 00:15:36.004 --rc geninfo_unexecuted_blocks=1 00:15:36.004 00:15:36.004 ' 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.004 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.005 15:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:36.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:36.005 15:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2980645 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2980645' 00:15:36.005 Process pid: 2980645 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2980645 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2980645 ']' 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.005 15:32:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:36.005 [2024-12-06 15:32:41.950471] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:15:36.005 [2024-12-06 15:32:41.950521] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.264 [2024-12-06 15:32:42.023860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:36.264 [2024-12-06 15:32:42.065548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.264 [2024-12-06 15:32:42.065585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.264 [2024-12-06 15:32:42.065592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.264 [2024-12-06 15:32:42.065598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.264 [2024-12-06 15:32:42.065603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:36.264 [2024-12-06 15:32:42.066937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.264 [2024-12-06 15:32:42.067045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.264 [2024-12-06 15:32:42.067045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.264 15:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.264 15:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:36.264 15:32:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:37.200 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:37.200 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:37.200 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:37.200 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.200 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:37.200 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.200 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:37.200 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:37.200 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.200 15:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:37.458 malloc0 00:15:37.458 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.458 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:37.458 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.458 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:37.458 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.459 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:37.459 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.459 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:37.459 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.459 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:37.459 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.459 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:37.459 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:37.459 15:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:37.459 00:15:37.459 00:15:37.459 CUnit - A unit testing framework for C - Version 2.1-3 00:15:37.459 http://cunit.sourceforge.net/ 00:15:37.459 00:15:37.459 00:15:37.459 Suite: nvme_compliance 00:15:37.459 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-06 15:32:43.393783] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.459 [2024-12-06 15:32:43.395117] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:37.459 [2024-12-06 15:32:43.395131] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:37.459 [2024-12-06 15:32:43.395138] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:37.459 [2024-12-06 15:32:43.396803] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.459 passed 00:15:37.717 Test: admin_identify_ctrlr_verify_fused ...[2024-12-06 15:32:43.477351] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.717 [2024-12-06 15:32:43.480370] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.717 passed 00:15:37.717 Test: admin_identify_ns ...[2024-12-06 15:32:43.556566] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.717 [2024-12-06 15:32:43.620381] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:37.717 [2024-12-06 15:32:43.628386] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:37.717 [2024-12-06 15:32:43.648474] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:37.717 passed 00:15:37.975 Test: admin_get_features_mandatory_features ...[2024-12-06 15:32:43.721963] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.975 [2024-12-06 15:32:43.724984] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.975 passed 00:15:37.975 Test: admin_get_features_optional_features ...[2024-12-06 15:32:43.803511] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.975 [2024-12-06 15:32:43.806525] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.975 passed 00:15:37.975 Test: admin_set_features_number_of_queues ...[2024-12-06 15:32:43.881112] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:38.233 [2024-12-06 15:32:43.986461] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:38.233 passed 00:15:38.234 Test: admin_get_log_page_mandatory_logs ...[2024-12-06 15:32:44.060097] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:38.234 [2024-12-06 15:32:44.063118] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:38.234 passed 00:15:38.234 Test: admin_get_log_page_with_lpo ...[2024-12-06 15:32:44.139707] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:38.234 [2024-12-06 15:32:44.208374] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:38.234 [2024-12-06 15:32:44.221420] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:38.491 passed 00:15:38.491 Test: fabric_property_get ...[2024-12-06 15:32:44.295117] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:38.491 [2024-12-06 15:32:44.296363] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:38.491 [2024-12-06 15:32:44.298133] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:38.491 passed 00:15:38.491 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-06 15:32:44.374632] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:38.491 [2024-12-06 15:32:44.375858] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:38.491 [2024-12-06 15:32:44.377651] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:38.491 passed 00:15:38.491 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-06 15:32:44.453609] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:38.750 [2024-12-06 15:32:44.541379] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:38.750 [2024-12-06 15:32:44.557381] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:38.750 [2024-12-06 15:32:44.562461] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:38.750 passed 00:15:38.750 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-06 15:32:44.636024] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:38.750 [2024-12-06 15:32:44.637241] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:38.750 [2024-12-06 15:32:44.639052] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:38.750 passed 00:15:38.750 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-06 15:32:44.715858] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.009 [2024-12-06 15:32:44.792379] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:39.009 [2024-12-06 
15:32:44.816378] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:39.009 [2024-12-06 15:32:44.821455] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.009 passed 00:15:39.009 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-06 15:32:44.893971] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.009 [2024-12-06 15:32:44.895190] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:39.009 [2024-12-06 15:32:44.895211] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:39.009 [2024-12-06 15:32:44.896989] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.009 passed 00:15:39.009 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-06 15:32:44.974630] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.265 [2024-12-06 15:32:45.067375] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:39.265 [2024-12-06 15:32:45.075382] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:39.265 [2024-12-06 15:32:45.083396] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:39.265 [2024-12-06 15:32:45.091383] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:39.265 [2024-12-06 15:32:45.120455] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.265 passed 00:15:39.265 Test: admin_create_io_sq_verify_pc ...[2024-12-06 15:32:45.193975] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.265 [2024-12-06 15:32:45.212381] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:39.265 [2024-12-06 15:32:45.230163] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.265 passed 00:15:39.522 Test: admin_create_io_qp_max_qps ...[2024-12-06 15:32:45.303665] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.459 [2024-12-06 15:32:46.397378] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:41.024 [2024-12-06 15:32:46.776409] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.024 passed 00:15:41.024 Test: admin_create_io_sq_shared_cq ...[2024-12-06 15:32:46.851160] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.024 [2024-12-06 15:32:46.982374] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:41.024 [2024-12-06 15:32:47.019441] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.282 passed 00:15:41.282 00:15:41.282 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.282 suites 1 1 n/a 0 0 00:15:41.282 tests 18 18 18 0 0 00:15:41.282 asserts 360 360 360 0 n/a 00:15:41.282 00:15:41.282 Elapsed time = 1.488 seconds 00:15:41.282 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2980645 00:15:41.282 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2980645 ']' 00:15:41.282 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2980645 00:15:41.282 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:41.282 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.282 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2980645 00:15:41.282 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.282 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.282 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2980645' 00:15:41.282 killing process with pid 2980645 00:15:41.282 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2980645 00:15:41.282 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2980645 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:41.541 00:15:41.541 real 0m5.604s 00:15:41.541 user 0m15.632s 00:15:41.541 sys 0m0.517s 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.541 ************************************ 00:15:41.541 END TEST nvmf_vfio_user_nvme_compliance 00:15:41.541 ************************************ 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:41.541 ************************************ 00:15:41.541 START TEST nvmf_vfio_user_fuzz 00:15:41.541 ************************************ 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:41.541 * Looking for test storage... 00:15:41.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.541 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.800 15:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:41.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.800 --rc genhtml_branch_coverage=1 00:15:41.800 --rc genhtml_function_coverage=1 00:15:41.800 --rc genhtml_legend=1 00:15:41.800 --rc geninfo_all_blocks=1 00:15:41.800 --rc geninfo_unexecuted_blocks=1 00:15:41.800 00:15:41.800 ' 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:41.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.800 --rc genhtml_branch_coverage=1 00:15:41.800 --rc genhtml_function_coverage=1 00:15:41.800 --rc genhtml_legend=1 00:15:41.800 --rc geninfo_all_blocks=1 00:15:41.800 --rc geninfo_unexecuted_blocks=1 00:15:41.800 00:15:41.800 ' 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:41.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.800 --rc genhtml_branch_coverage=1 00:15:41.800 --rc genhtml_function_coverage=1 00:15:41.800 --rc genhtml_legend=1 00:15:41.800 --rc geninfo_all_blocks=1 00:15:41.800 --rc geninfo_unexecuted_blocks=1 00:15:41.800 00:15:41.800 ' 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:41.800 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:41.800 --rc genhtml_branch_coverage=1 00:15:41.800 --rc genhtml_function_coverage=1 00:15:41.800 --rc genhtml_legend=1 00:15:41.800 --rc geninfo_all_blocks=1 00:15:41.800 --rc geninfo_unexecuted_blocks=1 00:15:41.800 00:15:41.800 ' 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.800 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.801 15:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:41.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2981628 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2981628' 00:15:41.801 Process pid: 2981628 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2981628 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2981628 ']' 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.801 15:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.801 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:42.058 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.058 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:42.058 15:32:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:42.998 malloc0 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:42.998 15:32:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:15.244 Fuzzing completed. Shutting down the fuzz application 00:16:15.244 00:16:15.244 Dumping successful admin opcodes: 00:16:15.244 9, 10, 00:16:15.244 Dumping successful io opcodes: 00:16:15.244 0, 00:16:15.244 NS: 0x20000081ef00 I/O qp, Total commands completed: 983690, total successful commands: 3855, random_seed: 4115628224 00:16:15.244 NS: 0x20000081ef00 admin qp, Total commands completed: 243504, total successful commands: 57, random_seed: 3533633536 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2981628 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2981628 ']' 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2981628 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2981628 00:16:15.244 15:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2981628' 00:16:15.244 killing process with pid 2981628 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2981628 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2981628 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:15.244 00:16:15.244 real 0m32.237s 00:16:15.244 user 0m29.294s 00:16:15.244 sys 0m31.827s 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:15.244 ************************************ 00:16:15.244 END TEST nvmf_vfio_user_fuzz 00:16:15.244 ************************************ 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:15.244 ************************************ 00:16:15.244 START TEST nvmf_auth_target 00:16:15.244 ************************************ 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:15.244 * Looking for test storage... 00:16:15.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.244 15:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.244 15:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:15.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.244 --rc genhtml_branch_coverage=1 00:16:15.244 --rc genhtml_function_coverage=1 00:16:15.244 --rc genhtml_legend=1 00:16:15.244 --rc geninfo_all_blocks=1 00:16:15.244 --rc geninfo_unexecuted_blocks=1 00:16:15.244 00:16:15.244 ' 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:15.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.244 --rc genhtml_branch_coverage=1 00:16:15.244 --rc genhtml_function_coverage=1 00:16:15.244 --rc genhtml_legend=1 00:16:15.244 --rc geninfo_all_blocks=1 00:16:15.244 --rc geninfo_unexecuted_blocks=1 00:16:15.244 00:16:15.244 ' 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:15.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.244 --rc genhtml_branch_coverage=1 00:16:15.244 --rc genhtml_function_coverage=1 00:16:15.244 --rc genhtml_legend=1 00:16:15.244 --rc geninfo_all_blocks=1 00:16:15.244 --rc geninfo_unexecuted_blocks=1 00:16:15.244 00:16:15.244 ' 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:15.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.244 --rc genhtml_branch_coverage=1 00:16:15.244 --rc genhtml_function_coverage=1 00:16:15.244 --rc genhtml_legend=1 00:16:15.244 
--rc geninfo_all_blocks=1 00:16:15.244 --rc geninfo_unexecuted_blocks=1 00:16:15.244 00:16:15.244 ' 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.244 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.245 
15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:15.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:15.245 15:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:15.245 15:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:15.245 15:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:20.516 15:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:20.516 15:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:20.516 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:20.516 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.516 
15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:20.516 Found net devices under 0000:86:00.0: cvl_0_0 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:20.516 
15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.516 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:20.517 Found net devices under 0000:86:00.1: cvl_0_1 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:20.517 15:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:20.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:16:20.517 00:16:20.517 --- 10.0.0.2 ping statistics --- 00:16:20.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.517 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:16:20.517 00:16:20.517 --- 10.0.0.1 ping statistics --- 00:16:20.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.517 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2990449 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2990449 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2990449 ']' 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.517 15:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2990478 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=812075f4833e0baa6f689f9676bd9c66ce8f578d322b62cb 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1x6 00:16:20.517 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 812075f4833e0baa6f689f9676bd9c66ce8f578d322b62cb 0 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 812075f4833e0baa6f689f9676bd9c66ce8f578d322b62cb 0 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=812075f4833e0baa6f689f9676bd9c66ce8f578d322b62cb 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1x6 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1x6 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.1x6 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=105fd4cccf4ce2f81702a062885bf9bb540c40b2e366cc4b1081ac3799a567fa 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xSn 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 105fd4cccf4ce2f81702a062885bf9bb540c40b2e366cc4b1081ac3799a567fa 3 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 105fd4cccf4ce2f81702a062885bf9bb540c40b2e366cc4b1081ac3799a567fa 3 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=105fd4cccf4ce2f81702a062885bf9bb540c40b2e366cc4b1081ac3799a567fa 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xSn 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xSn 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.xSn 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=685b67d480c40efe0293cfb72ecba7c7 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.d1J 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 685b67d480c40efe0293cfb72ecba7c7 1 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
685b67d480c40efe0293cfb72ecba7c7 1 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=685b67d480c40efe0293cfb72ecba7c7 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.d1J 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.d1J 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.d1J 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=92ed100d2fd0ef394745e3f801075391107e0696705050b2 00:16:20.518 15:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.L3l 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 92ed100d2fd0ef394745e3f801075391107e0696705050b2 2 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 92ed100d2fd0ef394745e3f801075391107e0696705050b2 2 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=92ed100d2fd0ef394745e3f801075391107e0696705050b2 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.L3l 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.L3l 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.L3l 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e1b8012abe15af643660494953a1898677aa50e2350c3766 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yj2 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e1b8012abe15af643660494953a1898677aa50e2350c3766 2 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e1b8012abe15af643660494953a1898677aa50e2350c3766 2 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e1b8012abe15af643660494953a1898677aa50e2350c3766 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yj2 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yj2 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.yj2 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:20.518 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=12a75337bce4d940e1e15d501c03a489 00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.4bH 00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 12a75337bce4d940e1e15d501c03a489 1 00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 12a75337bce4d940e1e15d501c03a489 1 00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=12a75337bce4d940e1e15d501c03a489 00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:16:20.519 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.4bH 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.4bH 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.4bH 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=37463e6b8a023ff2a50700d69c544f1021278c8a3dae46984bf192bb497a2fea 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.uYH 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 37463e6b8a023ff2a50700d69c544f1021278c8a3dae46984bf192bb497a2fea 3 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 37463e6b8a023ff2a50700d69c544f1021278c8a3dae46984bf192bb497a2fea 3 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=37463e6b8a023ff2a50700d69c544f1021278c8a3dae46984bf192bb497a2fea 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.uYH 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.uYH 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.uYH 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2990449 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2990449 ']' 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
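The trace above runs gen_dhchap_key/format_dhchap_key for each secret: map the digest name to its index (null=0, sha256=1, sha384=2, sha512=3), read len/2 random bytes with xxd, wrap the result in a DHHC-1 secret, and write it to a chmod-0600 temp file. A minimal standalone sketch of those steps follows; the base64-plus-CRC32 encoding done here in the inline python step is an assumption based on the NVMe DH-HMAC-CHAP secret representation, not SPDK's exact code:

```shell
#!/usr/bin/env bash
# Hedged sketch of the key-generation steps visible in the trace.
set -euo pipefail

format_dhchap_key() {
    local key=$1 digest=$2
    # Assumed encoding: base64(secret || crc32_le(secret)), tagged with
    # the digest index, mirroring the inline `python -` step in the trace.
    python3 - "$key" "$digest" <<'EOF'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
EOF
}

gen_dhchap_key() {
    local digest=$1 len=$2
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len hex chars of entropy
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    format_dhchap_key "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"                              # secrets must not be world-readable
    echo "$file"
}

f=$(gen_dhchap_key sha384 48)
grep -q '^DHHC-1:02:' "$f"
echo "$f"
rm -f "$f"
```

The digest index and key length pairs match what the trace shows (sha384/48 reads 24 random bytes, sha256/32 reads 16, sha512/64 reads 32).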
00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.778 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.036 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.036 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:21.036 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2990478 /var/tmp/host.sock 00:16:21.036 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2990478 ']' 00:16:21.036 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:21.036 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.036 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:21.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
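Once both daemons are listening, the trace loads every generated key, plus its controller-key counterpart when one exists, into the target keyring (/var/tmp/spdk.sock) and the host keyring (/var/tmp/host.sock) via keyring_file_add_key. A sketch of that loop, with a stub rpc() standing in for scripts/rpc.py -s <sock> so it runs without SPDK, and key paths borrowed from the trace purely as illustration:

```shell
#!/usr/bin/env bash
# Sketch of the key-loading loop in the trace; rpc() is a stub that echoes
# instead of calling spdk/scripts/rpc.py against a live socket.
set -euo pipefail
rpc() { echo "rpc $*"; }

keys=([0]=/tmp/spdk.key-null.1x6 [1]=/tmp/spdk.key-sha256.d1J)
ckeys=([0]=/tmp/spdk.key-sha512.xSn [1]=/tmp/spdk.key-sha384.L3l)

for i in "${!keys[@]}"; do
    # In the real run each call is issued twice: once per RPC socket.
    rpc keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]:-} ]]; then
        rpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done
```

The guard on ckeys matches the `[[ -n ... ]]` checks in the trace: key3 has no controller key, so its ckey registration is skipped.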
00:16:21.036 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.037 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.037 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.037 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:21.037 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:21.037 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.037 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.037 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.037 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:21.037 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1x6 00:16:21.037 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.037 15:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.037 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.037 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1x6 00:16:21.037 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1x6 00:16:21.295 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.xSn ]] 00:16:21.295 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xSn 00:16:21.295 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.295 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.295 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.295 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xSn 00:16:21.295 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xSn 00:16:21.553 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:21.554 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.d1J 00:16:21.554 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.554 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.554 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.554 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.d1J 00:16:21.554 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.d1J 00:16:21.813 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.L3l ]] 00:16:21.813 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L3l 00:16:21.813 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.813 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.813 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.813 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L3l 00:16:21.813 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L3l 00:16:22.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:22.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yj2 00:16:22.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.yj2 00:16:22.072 15:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.yj2 00:16:22.072 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.4bH ]] 00:16:22.072 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4bH 00:16:22.072 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.072 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.072 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.072 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4bH 00:16:22.072 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4bH 00:16:22.331 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:22.331 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.uYH 00:16:22.331 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.332 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.332 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.332 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.uYH 00:16:22.332 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.uYH 00:16:22.590 15:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:22.590 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:22.590 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.590 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.590 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:22.590 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:22.849 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:22.849 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.849 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.849 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:22.849 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.849 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.849 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.849 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.849 15:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.849 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.849 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.849 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.849 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.849 00:16:23.107 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.107 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.107 15:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.107 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.107 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.107 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.107 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:23.107 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.107 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.107 { 00:16:23.107 "cntlid": 1, 00:16:23.107 "qid": 0, 00:16:23.107 "state": "enabled", 00:16:23.107 "thread": "nvmf_tgt_poll_group_000", 00:16:23.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:23.107 "listen_address": { 00:16:23.107 "trtype": "TCP", 00:16:23.107 "adrfam": "IPv4", 00:16:23.107 "traddr": "10.0.0.2", 00:16:23.107 "trsvcid": "4420" 00:16:23.107 }, 00:16:23.107 "peer_address": { 00:16:23.107 "trtype": "TCP", 00:16:23.107 "adrfam": "IPv4", 00:16:23.107 "traddr": "10.0.0.1", 00:16:23.107 "trsvcid": "47118" 00:16:23.107 }, 00:16:23.107 "auth": { 00:16:23.107 "state": "completed", 00:16:23.107 "digest": "sha256", 00:16:23.107 "dhgroup": "null" 00:16:23.107 } 00:16:23.107 } 00:16:23.107 ]' 00:16:23.107 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.366 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.366 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.366 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.366 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.366 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.366 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.366 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.625 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:16:23.625 15:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:16:24.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:24.192 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.451 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:24.451 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.451 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.451 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:24.451 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:24.451 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.451 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.452 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.452 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.452 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.452 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.452 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.452 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.710 00:16:24.710 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.710 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.710 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.970 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.970 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.970 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.970 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.970 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.970 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.970 { 00:16:24.970 "cntlid": 3, 00:16:24.970 "qid": 0, 00:16:24.970 "state": "enabled", 00:16:24.970 "thread": "nvmf_tgt_poll_group_000", 00:16:24.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:24.970 "listen_address": { 00:16:24.970 "trtype": "TCP", 00:16:24.970 "adrfam": "IPv4", 00:16:24.970 
"traddr": "10.0.0.2", 00:16:24.970 "trsvcid": "4420" 00:16:24.970 }, 00:16:24.970 "peer_address": { 00:16:24.970 "trtype": "TCP", 00:16:24.970 "adrfam": "IPv4", 00:16:24.970 "traddr": "10.0.0.1", 00:16:24.970 "trsvcid": "35678" 00:16:24.970 }, 00:16:24.970 "auth": { 00:16:24.970 "state": "completed", 00:16:24.970 "digest": "sha256", 00:16:24.970 "dhgroup": "null" 00:16:24.970 } 00:16:24.970 } 00:16:24.970 ]' 00:16:24.970 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.970 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.970 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.970 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:24.970 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.970 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.970 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.970 15:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.229 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:16:25.229 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:16:25.797 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.797 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:25.797 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.797 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.797 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.797 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.797 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.797 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:26.056 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:26.057 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.057 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.057 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:26.057 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:26.057 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.057 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.057 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.057 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.057 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.057 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.057 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.057 15:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.316 00:16:26.316 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.316 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.316 
15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.316 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.316 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.316 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.316 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.316 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.316 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.316 { 00:16:26.316 "cntlid": 5, 00:16:26.316 "qid": 0, 00:16:26.316 "state": "enabled", 00:16:26.316 "thread": "nvmf_tgt_poll_group_000", 00:16:26.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:26.316 "listen_address": { 00:16:26.316 "trtype": "TCP", 00:16:26.316 "adrfam": "IPv4", 00:16:26.316 "traddr": "10.0.0.2", 00:16:26.316 "trsvcid": "4420" 00:16:26.316 }, 00:16:26.316 "peer_address": { 00:16:26.316 "trtype": "TCP", 00:16:26.316 "adrfam": "IPv4", 00:16:26.316 "traddr": "10.0.0.1", 00:16:26.316 "trsvcid": "35696" 00:16:26.316 }, 00:16:26.316 "auth": { 00:16:26.316 "state": "completed", 00:16:26.316 "digest": "sha256", 00:16:26.316 "dhgroup": "null" 00:16:26.316 } 00:16:26.316 } 00:16:26.316 ]' 00:16:26.316 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.575 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.575 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:16:26.575 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.575 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.575 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.575 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.575 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.834 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:16:26.834 15:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:16:27.401 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.401 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:27.401 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.401 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.401 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.401 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.401 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.401 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.659 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:27.659 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.659 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.659 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.659 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.659 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.659 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:27.659 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.659 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:27.659 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.659 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.659 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.659 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.918 00:16:27.918 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.918 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.918 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.918 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.918 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.918 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.918 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.918 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.918 
15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.918 { 00:16:27.918 "cntlid": 7, 00:16:27.918 "qid": 0, 00:16:27.918 "state": "enabled", 00:16:27.918 "thread": "nvmf_tgt_poll_group_000", 00:16:27.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:27.918 "listen_address": { 00:16:27.918 "trtype": "TCP", 00:16:27.918 "adrfam": "IPv4", 00:16:27.918 "traddr": "10.0.0.2", 00:16:27.918 "trsvcid": "4420" 00:16:27.918 }, 00:16:27.918 "peer_address": { 00:16:27.918 "trtype": "TCP", 00:16:27.918 "adrfam": "IPv4", 00:16:27.918 "traddr": "10.0.0.1", 00:16:27.918 "trsvcid": "35728" 00:16:27.918 }, 00:16:27.918 "auth": { 00:16:27.918 "state": "completed", 00:16:27.918 "digest": "sha256", 00:16:27.918 "dhgroup": "null" 00:16:27.918 } 00:16:27.918 } 00:16:27.918 ]' 00:16:27.918 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.177 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.177 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.177 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.177 15:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.177 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.177 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.177 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.437 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:16:28.437 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.005 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.006 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.006 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.006 15:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.267 00:16:29.267 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.267 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.267 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.530 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.530 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.530 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.530 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.530 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.530 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.530 { 00:16:29.530 "cntlid": 9, 00:16:29.530 "qid": 0, 00:16:29.530 "state": "enabled", 00:16:29.530 "thread": "nvmf_tgt_poll_group_000", 00:16:29.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:29.530 "listen_address": { 00:16:29.530 "trtype": "TCP", 00:16:29.530 "adrfam": "IPv4", 00:16:29.530 "traddr": "10.0.0.2", 00:16:29.530 "trsvcid": "4420" 00:16:29.530 }, 00:16:29.530 "peer_address": { 00:16:29.530 "trtype": "TCP", 00:16:29.530 "adrfam": "IPv4", 00:16:29.530 "traddr": "10.0.0.1", 00:16:29.530 "trsvcid": "35746" 00:16:29.530 
}, 00:16:29.530 "auth": { 00:16:29.530 "state": "completed", 00:16:29.530 "digest": "sha256", 00:16:29.530 "dhgroup": "ffdhe2048" 00:16:29.530 } 00:16:29.530 } 00:16:29.530 ]' 00:16:29.530 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.530 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.530 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.530 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.788 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.788 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.788 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.788 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.788 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:16:29.788 15:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret 
DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:16:30.354 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.354 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:30.354 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.354 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.354 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.354 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.354 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.354 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.612 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:30.612 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.612 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.612 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:30.612 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:30.612 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.612 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.612 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.612 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.612 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.612 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.612 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.613 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.871 00:16:30.871 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.871 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.871 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.132 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.132 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.132 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.132 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.132 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.132 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.132 { 00:16:31.132 "cntlid": 11, 00:16:31.132 "qid": 0, 00:16:31.132 "state": "enabled", 00:16:31.132 "thread": "nvmf_tgt_poll_group_000", 00:16:31.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:31.132 "listen_address": { 00:16:31.132 "trtype": "TCP", 00:16:31.132 "adrfam": "IPv4", 00:16:31.132 "traddr": "10.0.0.2", 00:16:31.132 "trsvcid": "4420" 00:16:31.132 }, 00:16:31.132 "peer_address": { 00:16:31.132 "trtype": "TCP", 00:16:31.132 "adrfam": "IPv4", 00:16:31.132 "traddr": "10.0.0.1", 00:16:31.132 "trsvcid": "35764" 00:16:31.132 }, 00:16:31.132 "auth": { 00:16:31.132 "state": "completed", 00:16:31.132 "digest": "sha256", 00:16:31.132 "dhgroup": "ffdhe2048" 00:16:31.132 } 00:16:31.132 } 00:16:31.132 ]' 00:16:31.132 15:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.132 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.132 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.132 15:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.132 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.132 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.132 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.132 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.391 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:16:31.391 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:16:31.959 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.959 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:31.959 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:31.959 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.959 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.959 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.959 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.959 15:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.218 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:32.218 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.218 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.218 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.218 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.218 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.218 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.218 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.218 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:32.218 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.218 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.218 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.218 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.478 00:16:32.478 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.478 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.478 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.737 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.737 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.737 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.737 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.737 15:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.737 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.737 { 00:16:32.737 "cntlid": 13, 00:16:32.737 "qid": 0, 00:16:32.737 "state": "enabled", 00:16:32.737 "thread": "nvmf_tgt_poll_group_000", 00:16:32.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:32.737 "listen_address": { 00:16:32.737 "trtype": "TCP", 00:16:32.737 "adrfam": "IPv4", 00:16:32.737 "traddr": "10.0.0.2", 00:16:32.737 "trsvcid": "4420" 00:16:32.737 }, 00:16:32.737 "peer_address": { 00:16:32.737 "trtype": "TCP", 00:16:32.737 "adrfam": "IPv4", 00:16:32.737 "traddr": "10.0.0.1", 00:16:32.737 "trsvcid": "35790" 00:16:32.737 }, 00:16:32.737 "auth": { 00:16:32.737 "state": "completed", 00:16:32.737 "digest": "sha256", 00:16:32.737 "dhgroup": "ffdhe2048" 00:16:32.737 } 00:16:32.737 } 00:16:32.737 ]' 00:16:32.737 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.737 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.737 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.737 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.737 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.737 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.737 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.738 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.996 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:16:32.996 15:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:16:33.564 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.564 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:33.564 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.564 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.564 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.564 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.564 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:33.564 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:33.823 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:33.823 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.823 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.823 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:33.823 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.823 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.823 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:33.823 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.823 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.823 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.823 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.823 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.823 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.083 00:16:34.083 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.083 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.083 15:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.341 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.341 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.341 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.341 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.341 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.341 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.341 { 00:16:34.341 "cntlid": 15, 00:16:34.341 "qid": 0, 00:16:34.341 "state": "enabled", 00:16:34.341 "thread": "nvmf_tgt_poll_group_000", 00:16:34.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:34.341 "listen_address": { 00:16:34.341 "trtype": "TCP", 00:16:34.341 "adrfam": "IPv4", 00:16:34.341 "traddr": "10.0.0.2", 00:16:34.341 "trsvcid": "4420" 00:16:34.341 }, 00:16:34.341 "peer_address": { 00:16:34.341 "trtype": "TCP", 00:16:34.341 "adrfam": "IPv4", 00:16:34.341 "traddr": "10.0.0.1", 
00:16:34.341 "trsvcid": "35814" 00:16:34.341 }, 00:16:34.341 "auth": { 00:16:34.341 "state": "completed", 00:16:34.341 "digest": "sha256", 00:16:34.341 "dhgroup": "ffdhe2048" 00:16:34.341 } 00:16:34.341 } 00:16:34.341 ]' 00:16:34.341 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.341 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.341 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.341 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.341 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.341 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.341 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.341 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.599 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:16:34.599 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:16:35.164 15:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.164 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.164 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.164 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.164 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.164 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.164 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.164 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.164 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.423 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:35.423 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.423 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.423 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.423 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.423 15:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.423 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.423 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.423 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.423 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.423 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.423 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.423 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.681 00:16:35.681 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.681 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.681 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.681 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.681 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.681 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.681 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.939 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.939 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.939 { 00:16:35.939 "cntlid": 17, 00:16:35.939 "qid": 0, 00:16:35.939 "state": "enabled", 00:16:35.939 "thread": "nvmf_tgt_poll_group_000", 00:16:35.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:35.939 "listen_address": { 00:16:35.939 "trtype": "TCP", 00:16:35.939 "adrfam": "IPv4", 00:16:35.939 "traddr": "10.0.0.2", 00:16:35.939 "trsvcid": "4420" 00:16:35.939 }, 00:16:35.939 "peer_address": { 00:16:35.939 "trtype": "TCP", 00:16:35.939 "adrfam": "IPv4", 00:16:35.939 "traddr": "10.0.0.1", 00:16:35.939 "trsvcid": "47884" 00:16:35.939 }, 00:16:35.939 "auth": { 00:16:35.939 "state": "completed", 00:16:35.939 "digest": "sha256", 00:16:35.939 "dhgroup": "ffdhe3072" 00:16:35.939 } 00:16:35.939 } 00:16:35.939 ]' 00:16:35.939 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.939 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.939 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.939 15:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.939 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.939 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.939 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.939 15:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.198 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:16:36.198 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:16:36.763 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.763 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:36.763 15:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.763 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.763 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.763 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.763 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:36.763 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.021 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:37.021 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.021 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.021 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.021 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:37.021 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.021 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.021 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.021 15:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.021 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.021 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.021 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.021 15:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.325 00:16:37.326 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.326 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.326 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.326 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.326 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.326 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.326 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.326 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.326 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.326 { 00:16:37.326 "cntlid": 19, 00:16:37.326 "qid": 0, 00:16:37.326 "state": "enabled", 00:16:37.326 "thread": "nvmf_tgt_poll_group_000", 00:16:37.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:37.326 "listen_address": { 00:16:37.326 "trtype": "TCP", 00:16:37.326 "adrfam": "IPv4", 00:16:37.326 "traddr": "10.0.0.2", 00:16:37.326 "trsvcid": "4420" 00:16:37.326 }, 00:16:37.326 "peer_address": { 00:16:37.326 "trtype": "TCP", 00:16:37.326 "adrfam": "IPv4", 00:16:37.326 "traddr": "10.0.0.1", 00:16:37.326 "trsvcid": "47908" 00:16:37.326 }, 00:16:37.326 "auth": { 00:16:37.326 "state": "completed", 00:16:37.326 "digest": "sha256", 00:16:37.326 "dhgroup": "ffdhe3072" 00:16:37.326 } 00:16:37.326 } 00:16:37.326 ]' 00:16:37.326 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.326 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.326 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.607 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.607 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.607 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.607 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.607 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.607 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:16:37.607 15:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:16:38.174 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.174 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:38.174 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.174 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.174 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.174 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.174 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.174 15:33:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.432 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:38.432 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.432 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.432 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.432 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.432 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.432 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.432 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.432 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.432 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.432 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.432 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.432 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.690 00:16:38.690 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.690 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.690 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.948 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.948 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.948 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.948 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.948 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.948 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.948 { 00:16:38.948 "cntlid": 21, 00:16:38.948 "qid": 0, 00:16:38.948 "state": "enabled", 00:16:38.948 "thread": "nvmf_tgt_poll_group_000", 00:16:38.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:38.948 "listen_address": { 00:16:38.948 "trtype": "TCP", 00:16:38.948 "adrfam": "IPv4", 00:16:38.948 "traddr": "10.0.0.2", 00:16:38.948 
"trsvcid": "4420" 00:16:38.948 }, 00:16:38.948 "peer_address": { 00:16:38.948 "trtype": "TCP", 00:16:38.948 "adrfam": "IPv4", 00:16:38.948 "traddr": "10.0.0.1", 00:16:38.948 "trsvcid": "47946" 00:16:38.948 }, 00:16:38.948 "auth": { 00:16:38.948 "state": "completed", 00:16:38.948 "digest": "sha256", 00:16:38.948 "dhgroup": "ffdhe3072" 00:16:38.948 } 00:16:38.948 } 00:16:38.948 ]' 00:16:38.948 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.948 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.948 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.948 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.948 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.948 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.948 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.948 15:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.206 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:16:39.206 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:16:39.772 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.772 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:39.772 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.772 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.772 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.773 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.773 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:39.773 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.031 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:40.031 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.031 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.031 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.031 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.031 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.031 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:40.031 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.031 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.031 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.031 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.031 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.031 15:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.289 00:16:40.289 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.289 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.289 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.548 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.548 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.548 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.548 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.548 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.548 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.548 { 00:16:40.548 "cntlid": 23, 00:16:40.548 "qid": 0, 00:16:40.548 "state": "enabled", 00:16:40.548 "thread": "nvmf_tgt_poll_group_000", 00:16:40.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:40.548 "listen_address": { 00:16:40.548 "trtype": "TCP", 00:16:40.548 "adrfam": "IPv4", 00:16:40.548 "traddr": "10.0.0.2", 00:16:40.548 "trsvcid": "4420" 00:16:40.548 }, 00:16:40.548 "peer_address": { 00:16:40.548 "trtype": "TCP", 00:16:40.548 "adrfam": "IPv4", 00:16:40.548 "traddr": "10.0.0.1", 00:16:40.548 "trsvcid": "47968" 00:16:40.548 }, 00:16:40.548 "auth": { 00:16:40.548 "state": "completed", 00:16:40.548 "digest": "sha256", 00:16:40.548 "dhgroup": "ffdhe3072" 00:16:40.548 } 00:16:40.548 } 00:16:40.548 ]' 00:16:40.548 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.548 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.548 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.548 15:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.548 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.548 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.548 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.548 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.807 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:16:40.807 15:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:16:41.374 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.374 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:41.374 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.374 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:41.374 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.374 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.374 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.374 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.374 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.632 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:41.632 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.632 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.632 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:41.632 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.632 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.632 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.632 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.632 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:41.632 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.632 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.632 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.632 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.891 00:16:41.891 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.891 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.891 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.150 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.150 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.150 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.150 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.150 15:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.150 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.150 { 00:16:42.150 "cntlid": 25, 00:16:42.150 "qid": 0, 00:16:42.150 "state": "enabled", 00:16:42.150 "thread": "nvmf_tgt_poll_group_000", 00:16:42.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:42.150 "listen_address": { 00:16:42.150 "trtype": "TCP", 00:16:42.150 "adrfam": "IPv4", 00:16:42.150 "traddr": "10.0.0.2", 00:16:42.150 "trsvcid": "4420" 00:16:42.150 }, 00:16:42.150 "peer_address": { 00:16:42.150 "trtype": "TCP", 00:16:42.150 "adrfam": "IPv4", 00:16:42.150 "traddr": "10.0.0.1", 00:16:42.150 "trsvcid": "48012" 00:16:42.150 }, 00:16:42.150 "auth": { 00:16:42.150 "state": "completed", 00:16:42.150 "digest": "sha256", 00:16:42.150 "dhgroup": "ffdhe4096" 00:16:42.150 } 00:16:42.150 } 00:16:42.150 ]' 00:16:42.150 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.150 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.150 15:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.150 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.150 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.150 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.150 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.150 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.409 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:16:42.410 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:16:42.978 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.978 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:42.978 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.978 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.978 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.978 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.978 15:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.978 15:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.237 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:43.237 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.237 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.237 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:43.237 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.237 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.237 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.237 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.237 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.237 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.237 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.237 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.237 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.496 00:16:43.496 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.496 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.496 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.755 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.755 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.755 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.755 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.755 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.755 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.755 { 00:16:43.755 "cntlid": 27, 00:16:43.755 "qid": 0, 00:16:43.755 "state": "enabled", 00:16:43.755 "thread": "nvmf_tgt_poll_group_000", 00:16:43.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:43.755 "listen_address": { 00:16:43.755 "trtype": "TCP", 00:16:43.755 "adrfam": "IPv4", 00:16:43.755 "traddr": "10.0.0.2", 00:16:43.755 
"trsvcid": "4420" 00:16:43.755 }, 00:16:43.755 "peer_address": { 00:16:43.755 "trtype": "TCP", 00:16:43.755 "adrfam": "IPv4", 00:16:43.755 "traddr": "10.0.0.1", 00:16:43.755 "trsvcid": "48032" 00:16:43.755 }, 00:16:43.755 "auth": { 00:16:43.755 "state": "completed", 00:16:43.755 "digest": "sha256", 00:16:43.755 "dhgroup": "ffdhe4096" 00:16:43.755 } 00:16:43.755 } 00:16:43.755 ]' 00:16:43.755 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.755 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.755 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.755 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.755 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.755 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.755 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.755 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.014 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:16:44.014 15:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:16:44.580 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.580 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:44.580 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.580 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.580 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.580 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.580 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.580 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.839 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:44.839 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.839 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.839 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:44.839 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.839 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.839 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.839 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.839 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.839 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.839 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.839 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.839 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.098 00:16:45.098 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.098 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:45.098 15:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.357 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.357 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.357 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.357 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.357 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.357 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.357 { 00:16:45.357 "cntlid": 29, 00:16:45.357 "qid": 0, 00:16:45.357 "state": "enabled", 00:16:45.357 "thread": "nvmf_tgt_poll_group_000", 00:16:45.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:45.357 "listen_address": { 00:16:45.357 "trtype": "TCP", 00:16:45.357 "adrfam": "IPv4", 00:16:45.357 "traddr": "10.0.0.2", 00:16:45.357 "trsvcid": "4420" 00:16:45.357 }, 00:16:45.357 "peer_address": { 00:16:45.357 "trtype": "TCP", 00:16:45.357 "adrfam": "IPv4", 00:16:45.357 "traddr": "10.0.0.1", 00:16:45.357 "trsvcid": "35242" 00:16:45.357 }, 00:16:45.357 "auth": { 00:16:45.357 "state": "completed", 00:16:45.357 "digest": "sha256", 00:16:45.357 "dhgroup": "ffdhe4096" 00:16:45.357 } 00:16:45.357 } 00:16:45.357 ]' 00:16:45.357 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.357 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.357 15:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.357 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.357 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.357 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.357 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.357 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.616 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:16:45.616 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:16:46.183 15:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.183 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.183 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.183 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.183 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.183 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.183 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.183 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.442 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:46.442 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.442 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.442 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.442 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.442 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.442 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:46.442 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.442 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.442 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.442 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.442 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.442 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.699 00:16:46.699 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.699 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.699 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.958 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.958 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.958 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.958 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.958 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.958 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.958 { 00:16:46.958 "cntlid": 31, 00:16:46.958 "qid": 0, 00:16:46.958 "state": "enabled", 00:16:46.958 "thread": "nvmf_tgt_poll_group_000", 00:16:46.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:46.958 "listen_address": { 00:16:46.958 "trtype": "TCP", 00:16:46.958 "adrfam": "IPv4", 00:16:46.958 "traddr": "10.0.0.2", 00:16:46.958 "trsvcid": "4420" 00:16:46.958 }, 00:16:46.958 "peer_address": { 00:16:46.958 "trtype": "TCP", 00:16:46.958 "adrfam": "IPv4", 00:16:46.958 "traddr": "10.0.0.1", 00:16:46.958 "trsvcid": "35270" 00:16:46.958 }, 00:16:46.958 "auth": { 00:16:46.958 "state": "completed", 00:16:46.958 "digest": "sha256", 00:16:46.958 "dhgroup": "ffdhe4096" 00:16:46.958 } 00:16:46.958 } 00:16:46.958 ]' 00:16:46.958 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.958 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.958 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.958 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.958 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.958 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.958 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.958 15:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.217 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:16:47.217 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:16:47.786 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.786 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:47.786 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.786 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.786 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.786 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.786 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.786 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.786 15:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.045 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:48.045 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.046 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.046 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.046 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.046 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.046 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.046 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.046 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.046 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.046 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.046 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.046 15:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.304 00:16:48.304 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.304 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.304 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.563 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.563 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.563 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.563 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.563 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.563 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.563 { 00:16:48.563 "cntlid": 33, 00:16:48.563 "qid": 0, 00:16:48.563 "state": "enabled", 00:16:48.563 "thread": "nvmf_tgt_poll_group_000", 00:16:48.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:48.563 "listen_address": { 00:16:48.563 "trtype": "TCP", 00:16:48.563 "adrfam": "IPv4", 00:16:48.563 "traddr": "10.0.0.2", 00:16:48.563 
"trsvcid": "4420" 00:16:48.563 }, 00:16:48.563 "peer_address": { 00:16:48.563 "trtype": "TCP", 00:16:48.563 "adrfam": "IPv4", 00:16:48.563 "traddr": "10.0.0.1", 00:16:48.563 "trsvcid": "35286" 00:16:48.563 }, 00:16:48.563 "auth": { 00:16:48.563 "state": "completed", 00:16:48.563 "digest": "sha256", 00:16:48.563 "dhgroup": "ffdhe6144" 00:16:48.563 } 00:16:48.563 } 00:16:48.563 ]' 00:16:48.563 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.563 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.563 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.563 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.563 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.563 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.563 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.563 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.821 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:16:48.821 15:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:16:49.388 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.388 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.388 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.388 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.388 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.388 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.388 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.388 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.646 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:49.646 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.646 15:33:55 
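The `hostrpc` helper seen throughout this trace is a thin wrapper around `scripts/rpc.py`, which submits JSON-RPC 2.0 requests to the SPDK application over a Unix socket (`/var/tmp/host.sock` here). A minimal sketch of the request framing only — the parameter names mirror the `bdev_nvme_attach_controller` call in the log, but this builds the payload in memory rather than talking to a live target:

```python
import json

def build_rpc_request(method, params, req_id=1):
    # JSON-RPC 2.0 envelope, as used by SPDK's scripts/rpc.py
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "id": req_id,
        "params": params,
    })

# The attach call from the trace, expressed as a JSON-RPC payload.
req = build_rpc_request("bdev_nvme_attach_controller", {
    "name": "nvme0",
    "trtype": "tcp",
    "adrfam": "ipv4",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2024-03.io.spdk:cnode0",
})
```

In the real harness this JSON would be written to the `-s /var/tmp/host.sock` socket and the reply parsed for `"result"` or `"error"`.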
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.646 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:49.646 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:49.646 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.646 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.646 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.646 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.646 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.646 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.646 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.646 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.904 00:16:49.904 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.904 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.904 15:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.162 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.162 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.162 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.162 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.162 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.162 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.162 { 00:16:50.162 "cntlid": 35, 00:16:50.162 "qid": 0, 00:16:50.162 "state": "enabled", 00:16:50.162 "thread": "nvmf_tgt_poll_group_000", 00:16:50.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:50.162 "listen_address": { 00:16:50.162 "trtype": "TCP", 00:16:50.162 "adrfam": "IPv4", 00:16:50.162 "traddr": "10.0.0.2", 00:16:50.162 "trsvcid": "4420" 00:16:50.162 }, 00:16:50.162 "peer_address": { 00:16:50.162 "trtype": "TCP", 00:16:50.162 "adrfam": "IPv4", 00:16:50.162 "traddr": "10.0.0.1", 00:16:50.162 "trsvcid": "35306" 00:16:50.162 }, 00:16:50.162 "auth": { 00:16:50.162 "state": "completed", 00:16:50.162 "digest": "sha256", 00:16:50.162 "dhgroup": "ffdhe6144" 00:16:50.162 } 00:16:50.162 } 00:16:50.162 ]' 00:16:50.162 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.162 15:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.162 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.162 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.162 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.420 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.420 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.420 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.420 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:16:50.420 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:16:50.988 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.988 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:50.988 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.988 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.988 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.988 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.988 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.988 15:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:51.247 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:51.247 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.247 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.247 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:51.247 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:51.247 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.247 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:51.247 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.247 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.247 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.247 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.247 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.247 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.824 00:16:51.824 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.824 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.824 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.824 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.824 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.824 15:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.824 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.824 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.824 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.824 { 00:16:51.824 "cntlid": 37, 00:16:51.824 "qid": 0, 00:16:51.824 "state": "enabled", 00:16:51.824 "thread": "nvmf_tgt_poll_group_000", 00:16:51.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:51.824 "listen_address": { 00:16:51.824 "trtype": "TCP", 00:16:51.824 "adrfam": "IPv4", 00:16:51.824 "traddr": "10.0.0.2", 00:16:51.824 "trsvcid": "4420" 00:16:51.824 }, 00:16:51.824 "peer_address": { 00:16:51.824 "trtype": "TCP", 00:16:51.824 "adrfam": "IPv4", 00:16:51.824 "traddr": "10.0.0.1", 00:16:51.824 "trsvcid": "35328" 00:16:51.824 }, 00:16:51.824 "auth": { 00:16:51.824 "state": "completed", 00:16:51.824 "digest": "sha256", 00:16:51.824 "dhgroup": "ffdhe6144" 00:16:51.824 } 00:16:51.824 } 00:16:51.824 ]' 00:16:51.824 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.824 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.824 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.825 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.825 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.083 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.083 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
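The `jq -r '.[0].auth.digest'` / `.dhgroup` / `.state` probes at `auth.sh@75`–`77` are simply field checks on the `nvmf_subsystem_get_qpairs` output captured above. The same verification in Python, using a trimmed sample shaped like the qpair JSON in this trace:

```python
import json

# Sample nvmf_subsystem_get_qpairs output, shape taken from the trace above.
qpairs = json.loads("""[
  {"cntlid": 37, "qid": 0, "state": "enabled",
   "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe6144"}}
]""")

# The same three checks auth.sh performs with jq on .[0].auth.*
auth = qpairs[0]["auth"]
assert auth["digest"] == "sha256"
assert auth["dhgroup"] == "ffdhe6144"
assert auth["state"] == "completed"
```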
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.083 15:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.083 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:16:52.083 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:16:52.649 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.649 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:52.649 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.649 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.649 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.649 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.649 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.649 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.908 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:52.908 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.908 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.908 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:52.908 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.908 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.908 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:52.908 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.908 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.908 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.908 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.908 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.908 15:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.165 00:16:53.424 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.424 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.424 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.424 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.424 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.424 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.424 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.424 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.424 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.424 { 00:16:53.424 "cntlid": 39, 00:16:53.424 "qid": 0, 00:16:53.424 "state": "enabled", 00:16:53.424 "thread": "nvmf_tgt_poll_group_000", 00:16:53.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:53.424 "listen_address": { 00:16:53.424 "trtype": "TCP", 00:16:53.424 "adrfam": 
"IPv4", 00:16:53.424 "traddr": "10.0.0.2", 00:16:53.424 "trsvcid": "4420" 00:16:53.424 }, 00:16:53.424 "peer_address": { 00:16:53.424 "trtype": "TCP", 00:16:53.424 "adrfam": "IPv4", 00:16:53.424 "traddr": "10.0.0.1", 00:16:53.424 "trsvcid": "35354" 00:16:53.424 }, 00:16:53.424 "auth": { 00:16:53.424 "state": "completed", 00:16:53.424 "digest": "sha256", 00:16:53.424 "dhgroup": "ffdhe6144" 00:16:53.424 } 00:16:53.424 } 00:16:53.424 ]' 00:16:53.424 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.682 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.682 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.682 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.682 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.682 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.682 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.682 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.939 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:16:53.939 15:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:16:54.504 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.504 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:54.504 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.504 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.504 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.504 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.504 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.504 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.504 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.504 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:54.504 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.504 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.504 
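The `for dhgroup in "${dhgroups[@]}"` / `for keyid in "${!keys[@]}"` markers show the harness repeating one connect_authenticate cycle per (digest, dhgroup, keyid) combination. A sketch of the enumeration visible in this portion of the trace (sha256 with ffdhe6144 then ffdhe8192, keys 0 through 3 — values read off the log, not the full test matrix):

```python
# Combinations exercised in this slice of the trace.
digests = ["sha256"]
dhgroups = ["ffdhe6144", "ffdhe8192"]
keyids = [0, 1, 2, 3]

cases = [(d, g, k) for d in digests for g in dhgroups for k in keyids]
```

Each tuple corresponds to one set_options / add_host / attach / verify / detach / remove_host round trip in the log.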
15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.504 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:54.504 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.505 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.505 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.505 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.505 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.505 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.505 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.505 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.070 00:16:55.070 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.070 15:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.070 15:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.327 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.327 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.327 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.327 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.327 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.327 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.327 { 00:16:55.327 "cntlid": 41, 00:16:55.327 "qid": 0, 00:16:55.327 "state": "enabled", 00:16:55.327 "thread": "nvmf_tgt_poll_group_000", 00:16:55.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:55.327 "listen_address": { 00:16:55.327 "trtype": "TCP", 00:16:55.327 "adrfam": "IPv4", 00:16:55.327 "traddr": "10.0.0.2", 00:16:55.327 "trsvcid": "4420" 00:16:55.327 }, 00:16:55.327 "peer_address": { 00:16:55.327 "trtype": "TCP", 00:16:55.327 "adrfam": "IPv4", 00:16:55.327 "traddr": "10.0.0.1", 00:16:55.327 "trsvcid": "41636" 00:16:55.327 }, 00:16:55.327 "auth": { 00:16:55.327 "state": "completed", 00:16:55.327 "digest": "sha256", 00:16:55.327 "dhgroup": "ffdhe8192" 00:16:55.327 } 00:16:55.327 } 00:16:55.327 ]' 00:16:55.327 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.327 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:55.327 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.327 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.327 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.327 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.327 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.327 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.585 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:16:55.585 15:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:16:56.151 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.151 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
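The `--dhchap-secret` strings in these `nvme connect` invocations follow the NVMe-oF DH-HMAC-CHAP secret representation: `DHHC-1:<hash-id>:<base64 payload>:`. A hedged sketch that generates and validates a secret in that shape, assuming the payload is the raw key followed by its CRC-32 in little-endian order (the convention used by `nvme gen-dhchap-key`):

```python
import base64
import binascii
import os
import struct

def gen_dhchap_secret(key_len=32, hash_id="00"):
    # Random key followed by its CRC-32 (little-endian), base64-encoded.
    key = os.urandom(key_len)
    crc = struct.pack("<I", binascii.crc32(key) & 0xFFFFFFFF)
    return "DHHC-1:%s:%s:" % (hash_id, base64.b64encode(key + crc).decode())

def check_dhchap_secret(secret):
    # Verify the DHHC-1 framing and the trailing CRC-32 of the decoded payload.
    prefix, hash_id, b64, trailer = secret.split(":")
    assert prefix == "DHHC-1" and trailer == ""
    raw = base64.b64decode(b64)
    key, crc = raw[:-4], raw[-4:]
    return crc == struct.pack("<I", binascii.crc32(key) & 0xFFFFFFFF)
```

This is an illustration of the framing only; the secrets in the log were generated by the test setup, not by this sketch.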
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:56.151 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.151 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.151 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.151 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.151 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.151 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.409 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:56.409 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.409 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.409 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.409 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:56.409 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.409 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:56.409 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.409 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.409 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.409 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.409 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.409 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.977 00:16:56.977 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.977 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.977 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.236 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.236 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.236 15:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.236 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.236 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.236 15:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.236 { 00:16:57.236 "cntlid": 43, 00:16:57.236 "qid": 0, 00:16:57.236 "state": "enabled", 00:16:57.236 "thread": "nvmf_tgt_poll_group_000", 00:16:57.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:57.236 "listen_address": { 00:16:57.236 "trtype": "TCP", 00:16:57.236 "adrfam": "IPv4", 00:16:57.236 "traddr": "10.0.0.2", 00:16:57.236 "trsvcid": "4420" 00:16:57.236 }, 00:16:57.236 "peer_address": { 00:16:57.236 "trtype": "TCP", 00:16:57.236 "adrfam": "IPv4", 00:16:57.236 "traddr": "10.0.0.1", 00:16:57.236 "trsvcid": "41662" 00:16:57.236 }, 00:16:57.236 "auth": { 00:16:57.236 "state": "completed", 00:16:57.236 "digest": "sha256", 00:16:57.236 "dhgroup": "ffdhe8192" 00:16:57.236 } 00:16:57.236 } 00:16:57.236 ]' 00:16:57.236 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.236 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.236 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.236 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.236 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.236 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.236 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.236 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.495 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:16:57.495 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:16:58.062 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.062 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:58.062 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.062 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.062 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.062 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.062 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.062 15:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.320 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:58.320 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.320 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.320 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.320 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.321 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.321 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.321 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.321 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.321 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.321 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.321 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.321 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.887 00:16:58.887 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.887 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.887 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.887 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.887 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.887 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.887 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.887 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.887 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.887 { 00:16:58.887 "cntlid": 45, 00:16:58.887 "qid": 0, 00:16:58.887 "state": "enabled", 00:16:58.887 "thread": "nvmf_tgt_poll_group_000", 00:16:58.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:58.887 
"listen_address": { 00:16:58.887 "trtype": "TCP", 00:16:58.887 "adrfam": "IPv4", 00:16:58.887 "traddr": "10.0.0.2", 00:16:58.887 "trsvcid": "4420" 00:16:58.887 }, 00:16:58.887 "peer_address": { 00:16:58.887 "trtype": "TCP", 00:16:58.887 "adrfam": "IPv4", 00:16:58.887 "traddr": "10.0.0.1", 00:16:58.887 "trsvcid": "41686" 00:16:58.887 }, 00:16:58.887 "auth": { 00:16:58.887 "state": "completed", 00:16:58.887 "digest": "sha256", 00:16:58.887 "dhgroup": "ffdhe8192" 00:16:58.887 } 00:16:58.887 } 00:16:58.887 ]' 00:16:58.887 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.887 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.887 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.146 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.146 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.146 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.146 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.146 15:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.404 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:16:59.404 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.972 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.973 15:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.540 00:17:00.540 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.540 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:00.540 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.800 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.800 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.800 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.800 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.800 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.800 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.800 { 00:17:00.800 "cntlid": 47, 00:17:00.800 "qid": 0, 00:17:00.800 "state": "enabled", 00:17:00.800 "thread": "nvmf_tgt_poll_group_000", 00:17:00.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:00.800 "listen_address": { 00:17:00.800 "trtype": "TCP", 00:17:00.800 "adrfam": "IPv4", 00:17:00.800 "traddr": "10.0.0.2", 00:17:00.800 "trsvcid": "4420" 00:17:00.800 }, 00:17:00.800 "peer_address": { 00:17:00.800 "trtype": "TCP", 00:17:00.800 "adrfam": "IPv4", 00:17:00.800 "traddr": "10.0.0.1", 00:17:00.800 "trsvcid": "41712" 00:17:00.800 }, 00:17:00.800 "auth": { 00:17:00.800 "state": "completed", 00:17:00.800 "digest": "sha256", 00:17:00.800 "dhgroup": "ffdhe8192" 00:17:00.800 } 00:17:00.800 } 00:17:00.800 ]' 00:17:00.800 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.800 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.800 15:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.800 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.800 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.800 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.800 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.800 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.059 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:17:01.059 15:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:17:01.626 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.626 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:01.626 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:01.626 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.626 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.626 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:01.626 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.626 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.626 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.626 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.885 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:01.885 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.885 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.885 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:01.885 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:01.885 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.885 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.885 
15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.885 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.885 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.885 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.885 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.885 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.144 00:17:02.144 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.144 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.144 15:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.403 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.403 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.403 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.403 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.403 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.403 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.403 { 00:17:02.403 "cntlid": 49, 00:17:02.403 "qid": 0, 00:17:02.403 "state": "enabled", 00:17:02.403 "thread": "nvmf_tgt_poll_group_000", 00:17:02.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:02.403 "listen_address": { 00:17:02.403 "trtype": "TCP", 00:17:02.403 "adrfam": "IPv4", 00:17:02.403 "traddr": "10.0.0.2", 00:17:02.403 "trsvcid": "4420" 00:17:02.403 }, 00:17:02.403 "peer_address": { 00:17:02.403 "trtype": "TCP", 00:17:02.403 "adrfam": "IPv4", 00:17:02.403 "traddr": "10.0.0.1", 00:17:02.403 "trsvcid": "41732" 00:17:02.403 }, 00:17:02.403 "auth": { 00:17:02.403 "state": "completed", 00:17:02.403 "digest": "sha384", 00:17:02.403 "dhgroup": "null" 00:17:02.403 } 00:17:02.403 } 00:17:02.403 ]' 00:17:02.403 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.403 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.403 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.403 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:02.403 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.403 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.403 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:02.403 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.662 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:17:02.662 15:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:17:03.231 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.231 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:03.231 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.231 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.231 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.231 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.231 15:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.231 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.490 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:03.490 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.490 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.490 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:03.490 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:03.490 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.490 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.490 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.490 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.490 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.490 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.490 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.490 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.749 00:17:03.749 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.749 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.749 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.008 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.008 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.008 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.008 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.008 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.008 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.008 { 00:17:04.008 "cntlid": 51, 00:17:04.008 "qid": 0, 00:17:04.008 "state": "enabled", 00:17:04.008 "thread": "nvmf_tgt_poll_group_000", 00:17:04.008 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:04.008 "listen_address": { 00:17:04.008 "trtype": "TCP", 00:17:04.008 "adrfam": "IPv4", 00:17:04.008 "traddr": "10.0.0.2", 00:17:04.008 "trsvcid": "4420" 00:17:04.008 }, 00:17:04.008 "peer_address": { 00:17:04.008 "trtype": "TCP", 00:17:04.008 "adrfam": "IPv4", 00:17:04.008 "traddr": "10.0.0.1", 00:17:04.008 "trsvcid": "41766" 00:17:04.008 }, 00:17:04.008 "auth": { 00:17:04.008 "state": "completed", 00:17:04.008 "digest": "sha384", 00:17:04.008 "dhgroup": "null" 00:17:04.008 } 00:17:04.008 } 00:17:04.008 ]' 00:17:04.008 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.008 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.008 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.008 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:04.008 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.008 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.008 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.008 15:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.267 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:17:04.267 15:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:17:04.853 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.853 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:04.853 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.853 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.853 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.853 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.853 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:04.853 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.112 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:05.112 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:17:05.112 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.112 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:05.112 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.112 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.112 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.112 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.112 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.112 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.112 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.112 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.112 15:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.371 00:17:05.371 15:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.371 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.371 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.371 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.371 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.371 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.371 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.630 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.630 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.630 { 00:17:05.630 "cntlid": 53, 00:17:05.630 "qid": 0, 00:17:05.630 "state": "enabled", 00:17:05.630 "thread": "nvmf_tgt_poll_group_000", 00:17:05.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:05.630 "listen_address": { 00:17:05.630 "trtype": "TCP", 00:17:05.630 "adrfam": "IPv4", 00:17:05.630 "traddr": "10.0.0.2", 00:17:05.630 "trsvcid": "4420" 00:17:05.630 }, 00:17:05.630 "peer_address": { 00:17:05.630 "trtype": "TCP", 00:17:05.630 "adrfam": "IPv4", 00:17:05.630 "traddr": "10.0.0.1", 00:17:05.630 "trsvcid": "35436" 00:17:05.630 }, 00:17:05.630 "auth": { 00:17:05.630 "state": "completed", 00:17:05.630 "digest": "sha384", 00:17:05.630 "dhgroup": "null" 00:17:05.630 } 00:17:05.630 } 00:17:05.630 ]' 00:17:05.630 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:05.630 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.630 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.630 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:05.630 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.630 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.630 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.630 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.889 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:17:05.889 15:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:17:06.455 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.455 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:06.455 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.455 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.455 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.455 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.455 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.455 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.713 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:06.713 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.713 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.713 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:06.713 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.713 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.713 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:06.713 
15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.713 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.713 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.713 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.713 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.713 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.713 00:17:06.972 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.972 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.972 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.972 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.972 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.972 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.972 15:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.972 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.972 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.972 { 00:17:06.972 "cntlid": 55, 00:17:06.972 "qid": 0, 00:17:06.973 "state": "enabled", 00:17:06.973 "thread": "nvmf_tgt_poll_group_000", 00:17:06.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:06.973 "listen_address": { 00:17:06.973 "trtype": "TCP", 00:17:06.973 "adrfam": "IPv4", 00:17:06.973 "traddr": "10.0.0.2", 00:17:06.973 "trsvcid": "4420" 00:17:06.973 }, 00:17:06.973 "peer_address": { 00:17:06.973 "trtype": "TCP", 00:17:06.973 "adrfam": "IPv4", 00:17:06.973 "traddr": "10.0.0.1", 00:17:06.973 "trsvcid": "35464" 00:17:06.973 }, 00:17:06.973 "auth": { 00:17:06.973 "state": "completed", 00:17:06.973 "digest": "sha384", 00:17:06.973 "dhgroup": "null" 00:17:06.973 } 00:17:06.973 } 00:17:06.973 ]' 00:17:06.973 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.236 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.236 15:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.236 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:07.236 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.236 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.236 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.236 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.615 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:17:07.615 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:17:07.874 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.874 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:07.874 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.874 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.874 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.874 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.874 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.874 15:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.874 15:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.133 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:08.133 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.133 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.133 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:08.133 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:08.133 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.133 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.133 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.133 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.133 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.133 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.133 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.133 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.392 00:17:08.392 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.392 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.392 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.651 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.651 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.651 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.651 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.651 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.651 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.651 { 00:17:08.651 "cntlid": 57, 00:17:08.651 "qid": 0, 00:17:08.651 "state": "enabled", 00:17:08.651 "thread": "nvmf_tgt_poll_group_000", 00:17:08.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:08.651 "listen_address": { 00:17:08.651 "trtype": "TCP", 00:17:08.651 "adrfam": "IPv4", 00:17:08.651 "traddr": "10.0.0.2", 00:17:08.651 
"trsvcid": "4420" 00:17:08.651 }, 00:17:08.651 "peer_address": { 00:17:08.651 "trtype": "TCP", 00:17:08.651 "adrfam": "IPv4", 00:17:08.651 "traddr": "10.0.0.1", 00:17:08.651 "trsvcid": "35504" 00:17:08.651 }, 00:17:08.651 "auth": { 00:17:08.651 "state": "completed", 00:17:08.651 "digest": "sha384", 00:17:08.651 "dhgroup": "ffdhe2048" 00:17:08.651 } 00:17:08.651 } 00:17:08.651 ]' 00:17:08.651 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.651 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.651 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.651 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.651 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.651 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.651 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.652 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.911 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:17:08.911 15:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:17:09.477 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.477 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:09.477 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.477 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.477 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.477 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.477 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.477 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.734 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:09.734 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.734 15:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.734 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.734 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:09.734 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.734 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.734 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.734 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.735 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.735 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.735 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.735 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.992 00:17:09.992 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.992 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.992 15:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.251 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.251 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.251 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.251 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.251 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.251 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.251 { 00:17:10.251 "cntlid": 59, 00:17:10.251 "qid": 0, 00:17:10.251 "state": "enabled", 00:17:10.251 "thread": "nvmf_tgt_poll_group_000", 00:17:10.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:10.251 "listen_address": { 00:17:10.251 "trtype": "TCP", 00:17:10.251 "adrfam": "IPv4", 00:17:10.251 "traddr": "10.0.0.2", 00:17:10.251 "trsvcid": "4420" 00:17:10.251 }, 00:17:10.251 "peer_address": { 00:17:10.251 "trtype": "TCP", 00:17:10.251 "adrfam": "IPv4", 00:17:10.251 "traddr": "10.0.0.1", 00:17:10.251 "trsvcid": "35532" 00:17:10.251 }, 00:17:10.251 "auth": { 00:17:10.251 "state": "completed", 00:17:10.251 "digest": "sha384", 00:17:10.251 "dhgroup": "ffdhe2048" 00:17:10.251 } 00:17:10.251 } 00:17:10.251 ]' 00:17:10.251 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.251 15:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.251 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.251 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.251 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.251 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.251 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.251 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.510 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:17:10.510 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:17:11.076 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.076 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.076 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.076 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.076 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.076 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.076 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.076 15:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.334 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:11.334 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.334 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.334 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:11.334 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.334 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.334 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:11.334 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.334 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.334 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.334 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.334 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.334 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.592 00:17:11.593 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.593 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.593 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.850 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.850 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.850 15:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.850 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.850 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.850 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.850 { 00:17:11.850 "cntlid": 61, 00:17:11.850 "qid": 0, 00:17:11.850 "state": "enabled", 00:17:11.850 "thread": "nvmf_tgt_poll_group_000", 00:17:11.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:11.850 "listen_address": { 00:17:11.850 "trtype": "TCP", 00:17:11.850 "adrfam": "IPv4", 00:17:11.850 "traddr": "10.0.0.2", 00:17:11.850 "trsvcid": "4420" 00:17:11.850 }, 00:17:11.850 "peer_address": { 00:17:11.850 "trtype": "TCP", 00:17:11.850 "adrfam": "IPv4", 00:17:11.850 "traddr": "10.0.0.1", 00:17:11.850 "trsvcid": "35564" 00:17:11.850 }, 00:17:11.850 "auth": { 00:17:11.850 "state": "completed", 00:17:11.850 "digest": "sha384", 00:17:11.850 "dhgroup": "ffdhe2048" 00:17:11.850 } 00:17:11.850 } 00:17:11.850 ]' 00:17:11.850 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.850 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.850 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.850 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.850 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.850 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.851 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.851 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.109 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:17:12.109 15:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:17:12.676 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.676 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:12.676 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.676 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.676 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.676 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.676 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.676 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.676 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:12.676 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.676 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.676 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:12.676 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:12.934 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.934 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:12.934 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.934 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.934 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.934 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:12.934 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.934 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.934 00:17:13.192 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.192 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.192 15:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.192 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.192 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.192 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.192 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.192 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.192 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.192 { 00:17:13.192 "cntlid": 63, 00:17:13.192 "qid": 0, 00:17:13.192 "state": "enabled", 00:17:13.192 "thread": "nvmf_tgt_poll_group_000", 00:17:13.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:13.192 "listen_address": { 00:17:13.192 "trtype": "TCP", 00:17:13.192 "adrfam": 
"IPv4", 00:17:13.192 "traddr": "10.0.0.2", 00:17:13.192 "trsvcid": "4420" 00:17:13.192 }, 00:17:13.192 "peer_address": { 00:17:13.192 "trtype": "TCP", 00:17:13.192 "adrfam": "IPv4", 00:17:13.192 "traddr": "10.0.0.1", 00:17:13.192 "trsvcid": "35592" 00:17:13.192 }, 00:17:13.193 "auth": { 00:17:13.193 "state": "completed", 00:17:13.193 "digest": "sha384", 00:17:13.193 "dhgroup": "ffdhe2048" 00:17:13.193 } 00:17:13.193 } 00:17:13.193 ]' 00:17:13.193 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.193 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.193 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.452 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.452 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.452 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.452 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.452 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.711 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:17:13.711 15:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.277 
15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.277 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.278 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.278 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.536 00:17:14.794 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.794 15:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.794 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.794 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.794 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.794 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.794 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.794 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.794 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.794 { 00:17:14.794 "cntlid": 65, 00:17:14.794 "qid": 0, 00:17:14.794 "state": "enabled", 00:17:14.794 "thread": "nvmf_tgt_poll_group_000", 00:17:14.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:14.794 "listen_address": { 00:17:14.794 "trtype": "TCP", 00:17:14.794 "adrfam": "IPv4", 00:17:14.794 "traddr": "10.0.0.2", 00:17:14.794 "trsvcid": "4420" 00:17:14.794 }, 00:17:14.794 "peer_address": { 00:17:14.794 "trtype": "TCP", 00:17:14.794 "adrfam": "IPv4", 00:17:14.794 "traddr": "10.0.0.1", 00:17:14.794 "trsvcid": "54820" 00:17:14.794 }, 00:17:14.794 "auth": { 00:17:14.794 "state": "completed", 00:17:14.794 "digest": "sha384", 00:17:14.794 "dhgroup": "ffdhe3072" 00:17:14.794 } 00:17:14.794 } 00:17:14.794 ]' 00:17:14.794 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.794 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:14.794 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.052 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.052 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.052 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.052 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.052 15:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.311 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:17:15.311 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:17:15.878 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.878 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:15.878 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.878 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.878 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.879 15:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.138 00:17:16.138 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.138 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.138 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.397 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.397 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.397 15:34:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.397 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.397 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.397 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.397 { 00:17:16.397 "cntlid": 67, 00:17:16.397 "qid": 0, 00:17:16.397 "state": "enabled", 00:17:16.397 "thread": "nvmf_tgt_poll_group_000", 00:17:16.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:16.397 "listen_address": { 00:17:16.397 "trtype": "TCP", 00:17:16.397 "adrfam": "IPv4", 00:17:16.397 "traddr": "10.0.0.2", 00:17:16.397 "trsvcid": "4420" 00:17:16.397 }, 00:17:16.397 "peer_address": { 00:17:16.397 "trtype": "TCP", 00:17:16.397 "adrfam": "IPv4", 00:17:16.397 "traddr": "10.0.0.1", 00:17:16.397 "trsvcid": "54856" 00:17:16.397 }, 00:17:16.397 "auth": { 00:17:16.397 "state": "completed", 00:17:16.397 "digest": "sha384", 00:17:16.397 "dhgroup": "ffdhe3072" 00:17:16.397 } 00:17:16.397 } 00:17:16.397 ]' 00:17:16.397 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.397 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.397 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.656 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.656 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.656 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.656 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.656 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.916 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:17:16.916 15:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.482 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.741 00:17:17.741 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.741 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.741 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.999 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.999 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.999 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.999 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.999 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.999 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.999 { 00:17:17.999 "cntlid": 69, 00:17:17.999 "qid": 0, 00:17:17.999 "state": "enabled", 00:17:17.999 "thread": "nvmf_tgt_poll_group_000", 00:17:17.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:17.999 
"listen_address": { 00:17:17.999 "trtype": "TCP", 00:17:17.999 "adrfam": "IPv4", 00:17:17.999 "traddr": "10.0.0.2", 00:17:17.999 "trsvcid": "4420" 00:17:17.999 }, 00:17:17.999 "peer_address": { 00:17:17.999 "trtype": "TCP", 00:17:17.999 "adrfam": "IPv4", 00:17:17.999 "traddr": "10.0.0.1", 00:17:17.999 "trsvcid": "54886" 00:17:17.999 }, 00:17:17.999 "auth": { 00:17:17.999 "state": "completed", 00:17:17.999 "digest": "sha384", 00:17:17.999 "dhgroup": "ffdhe3072" 00:17:17.999 } 00:17:17.999 } 00:17:17.999 ]' 00:17:17.999 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.999 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.999 15:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.257 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.257 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.257 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.257 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.257 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.516 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:17:18.516 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:17:19.083 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.083 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.083 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.083 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.083 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.083 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.083 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.083 15:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.083 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:19.083 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.083 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:19.083 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:19.083 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.083 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.083 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:19.083 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.083 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.083 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.083 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.083 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.083 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.340 00:17:19.340 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.340 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:19.340 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.598 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.598 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.598 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.598 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.598 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.598 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.598 { 00:17:19.598 "cntlid": 71, 00:17:19.598 "qid": 0, 00:17:19.598 "state": "enabled", 00:17:19.598 "thread": "nvmf_tgt_poll_group_000", 00:17:19.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:19.598 "listen_address": { 00:17:19.598 "trtype": "TCP", 00:17:19.598 "adrfam": "IPv4", 00:17:19.598 "traddr": "10.0.0.2", 00:17:19.598 "trsvcid": "4420" 00:17:19.598 }, 00:17:19.598 "peer_address": { 00:17:19.598 "trtype": "TCP", 00:17:19.598 "adrfam": "IPv4", 00:17:19.598 "traddr": "10.0.0.1", 00:17:19.598 "trsvcid": "54908" 00:17:19.598 }, 00:17:19.598 "auth": { 00:17:19.598 "state": "completed", 00:17:19.598 "digest": "sha384", 00:17:19.598 "dhgroup": "ffdhe3072" 00:17:19.598 } 00:17:19.598 } 00:17:19.598 ]' 00:17:19.598 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.598 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.598 15:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.856 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.856 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.856 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.856 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.856 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.856 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:17:19.856 15:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:17:20.422 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.422 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.422 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:20.422 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.422 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.422 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.422 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.422 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.422 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.681 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:20.681 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.681 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.681 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:20.681 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:20.681 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.681 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.681 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:20.681 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.681 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.681 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.681 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.681 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.940 00:17:20.940 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.940 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.940 15:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.200 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.200 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.200 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.200 15:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.200 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.200 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.200 { 00:17:21.200 "cntlid": 73, 00:17:21.200 "qid": 0, 00:17:21.200 "state": "enabled", 00:17:21.200 "thread": "nvmf_tgt_poll_group_000", 00:17:21.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:21.200 "listen_address": { 00:17:21.200 "trtype": "TCP", 00:17:21.200 "adrfam": "IPv4", 00:17:21.200 "traddr": "10.0.0.2", 00:17:21.200 "trsvcid": "4420" 00:17:21.200 }, 00:17:21.200 "peer_address": { 00:17:21.200 "trtype": "TCP", 00:17:21.200 "adrfam": "IPv4", 00:17:21.200 "traddr": "10.0.0.1", 00:17:21.200 "trsvcid": "54946" 00:17:21.200 }, 00:17:21.200 "auth": { 00:17:21.200 "state": "completed", 00:17:21.200 "digest": "sha384", 00:17:21.200 "dhgroup": "ffdhe4096" 00:17:21.200 } 00:17:21.200 } 00:17:21.200 ]' 00:17:21.200 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.200 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.200 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.200 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.200 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.459 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.459 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.459 15:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.459 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:17:21.459 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:17:22.029 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.029 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:22.029 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.029 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.029 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.029 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.029 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.029 15:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.288 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:22.288 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.288 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.288 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:22.288 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:22.288 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.288 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.288 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.288 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.288 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.288 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.288 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.288 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.547 00:17:22.547 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.547 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.547 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.805 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.805 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.805 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.805 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.805 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.805 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.805 { 00:17:22.805 "cntlid": 75, 00:17:22.805 "qid": 0, 00:17:22.805 "state": "enabled", 00:17:22.805 "thread": "nvmf_tgt_poll_group_000", 00:17:22.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:22.805 
"listen_address": { 00:17:22.805 "trtype": "TCP", 00:17:22.805 "adrfam": "IPv4", 00:17:22.805 "traddr": "10.0.0.2", 00:17:22.805 "trsvcid": "4420" 00:17:22.805 }, 00:17:22.805 "peer_address": { 00:17:22.805 "trtype": "TCP", 00:17:22.805 "adrfam": "IPv4", 00:17:22.805 "traddr": "10.0.0.1", 00:17:22.805 "trsvcid": "54990" 00:17:22.805 }, 00:17:22.805 "auth": { 00:17:22.805 "state": "completed", 00:17:22.805 "digest": "sha384", 00:17:22.806 "dhgroup": "ffdhe4096" 00:17:22.806 } 00:17:22.806 } 00:17:22.806 ]' 00:17:22.806 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.806 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.806 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.806 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.806 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.806 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.806 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.806 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.064 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:17:23.064 15:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:17:23.632 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.632 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:23.632 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.632 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.632 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.632 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.632 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.632 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.891 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:23.891 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.891 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:23.891 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:23.891 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:23.891 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.891 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.891 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.891 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.891 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.891 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.891 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.891 15:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.150 00:17:24.150 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:24.150 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.150 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.409 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.409 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.409 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.409 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.409 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.409 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.409 { 00:17:24.409 "cntlid": 77, 00:17:24.409 "qid": 0, 00:17:24.409 "state": "enabled", 00:17:24.409 "thread": "nvmf_tgt_poll_group_000", 00:17:24.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:24.409 "listen_address": { 00:17:24.409 "trtype": "TCP", 00:17:24.409 "adrfam": "IPv4", 00:17:24.409 "traddr": "10.0.0.2", 00:17:24.409 "trsvcid": "4420" 00:17:24.409 }, 00:17:24.409 "peer_address": { 00:17:24.409 "trtype": "TCP", 00:17:24.409 "adrfam": "IPv4", 00:17:24.409 "traddr": "10.0.0.1", 00:17:24.409 "trsvcid": "55012" 00:17:24.409 }, 00:17:24.409 "auth": { 00:17:24.409 "state": "completed", 00:17:24.409 "digest": "sha384", 00:17:24.409 "dhgroup": "ffdhe4096" 00:17:24.409 } 00:17:24.409 } 00:17:24.409 ]' 00:17:24.409 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.409 15:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.409 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.409 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.409 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.667 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.667 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.667 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.667 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:17:24.667 15:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:17:25.236 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.236 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:25.236 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.236 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.236 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.236 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.236 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.236 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.495 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:25.495 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.495 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.495 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:25.495 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:25.495 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.495 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:25.495 15:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:25.495 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.495 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:25.495 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:25.495 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:25.495 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:25.754
00:17:25.754 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:25.754 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:25.754 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:26.012 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:26.012 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:26.012 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.012 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.012 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.013 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:26.013 {
00:17:26.013 "cntlid": 79,
00:17:26.013 "qid": 0,
00:17:26.013 "state": "enabled",
00:17:26.013 "thread": "nvmf_tgt_poll_group_000",
00:17:26.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:26.013 "listen_address": {
00:17:26.013 "trtype": "TCP",
00:17:26.013 "adrfam": "IPv4",
00:17:26.013 "traddr": "10.0.0.2",
00:17:26.013 "trsvcid": "4420"
00:17:26.013 },
00:17:26.013 "peer_address": {
00:17:26.013 "trtype": "TCP",
00:17:26.013 "adrfam": "IPv4",
00:17:26.013 "traddr": "10.0.0.1",
00:17:26.013 "trsvcid": "59566"
00:17:26.013 },
00:17:26.013 "auth": {
00:17:26.013 "state": "completed",
00:17:26.013 "digest": "sha384",
00:17:26.013 "dhgroup": "ffdhe4096"
00:17:26.013 }
00:17:26.013 }
00:17:26.013 ]'
00:17:26.013 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:26.013 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:26.013 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:26.013 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:26.013 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:26.013 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:26.013 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:26.013 15:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:26.271 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=:
00:17:26.271 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=:
00:17:26.840 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:26.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:26.840 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:26.840 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.840 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.840 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.840 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:26.840 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:26.840 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:26.840 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:27.099 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:17:27.099 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:27.099 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:27.099 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:27.099 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:27.099 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:27.099 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:27.099 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.099 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.099 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.099 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:27.099 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:27.100 15:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:27.358
00:17:27.358 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:27.358 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:27.358 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:27.616 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:27.616 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:27.616 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.616 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.616 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:27.616 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:27.616 {
00:17:27.616 "cntlid": 81,
00:17:27.616 "qid": 0,
00:17:27.616 "state": "enabled",
00:17:27.616 "thread": "nvmf_tgt_poll_group_000",
00:17:27.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:27.616 "listen_address": {
00:17:27.616 "trtype": "TCP",
00:17:27.616 "adrfam": "IPv4",
00:17:27.616 "traddr": "10.0.0.2",
00:17:27.616 "trsvcid": "4420"
00:17:27.616 },
00:17:27.616 "peer_address": {
00:17:27.616 "trtype": "TCP",
00:17:27.617 "adrfam": "IPv4",
00:17:27.617 "traddr": "10.0.0.1",
00:17:27.617 "trsvcid": "59606"
00:17:27.617 },
00:17:27.617 "auth": {
00:17:27.617 "state": "completed",
00:17:27.617 "digest": "sha384",
00:17:27.617 "dhgroup": "ffdhe6144"
00:17:27.617 }
00:17:27.617 }
00:17:27.617 ]'
00:17:27.617 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:27.617 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:27.617 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:27.617 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:27.617 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:27.875 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:27.875 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:27.875 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:27.875 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=:
00:17:27.875 15:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=:
00:17:28.442 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:28.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:28.442 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:28.442 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.442 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:28.701 15:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:29.270
00:17:29.270 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:29.270 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:29.270 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:29.270 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:29.270 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:29.270 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.270 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:29.270 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.270 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:29.270 {
00:17:29.270 "cntlid": 83,
00:17:29.270 "qid": 0,
00:17:29.270 "state": "enabled",
00:17:29.270 "thread": "nvmf_tgt_poll_group_000",
00:17:29.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:29.270 "listen_address": {
00:17:29.270 "trtype": "TCP",
00:17:29.270 "adrfam": "IPv4",
00:17:29.270 "traddr": "10.0.0.2",
00:17:29.270 "trsvcid": "4420"
00:17:29.270 },
00:17:29.270 "peer_address": {
00:17:29.270 "trtype": "TCP",
00:17:29.270 "adrfam": "IPv4",
00:17:29.270 "traddr": "10.0.0.1",
00:17:29.270 "trsvcid": "59622"
00:17:29.270 },
00:17:29.270 "auth": {
00:17:29.270 "state": "completed",
00:17:29.270 "digest": "sha384",
00:17:29.270 "dhgroup": "ffdhe6144"
00:17:29.270 }
00:17:29.270 }
00:17:29.270 ]'
00:17:29.270 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:29.270 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:29.270 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:29.528 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:29.528 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:29.528 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:29.528 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:29.528 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:29.528 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==:
00:17:29.528 15:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==:
00:17:30.093 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:30.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:30.350 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:30.350 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.350 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.350 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.350 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:30.350 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:30.350 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:30.350 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:17:30.350 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:30.350 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:30.350 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:30.351 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:30.351 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:30.351 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:30.351 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.351 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.351 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.351 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:30.351 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:30.351 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:30.918
00:17:30.918 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:30.918 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:30.918 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:30.918 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:30.918 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:30.918 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:30.918 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.918 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:30.918 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:30.918 {
00:17:30.918 "cntlid": 85,
00:17:30.918 "qid": 0,
00:17:30.918 "state": "enabled",
00:17:30.918 "thread": "nvmf_tgt_poll_group_000",
00:17:30.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:30.918 "listen_address": {
00:17:30.918 "trtype": "TCP",
00:17:30.918 "adrfam": "IPv4",
00:17:30.918 "traddr": "10.0.0.2",
00:17:30.918 "trsvcid": "4420"
00:17:30.918 },
00:17:30.918 "peer_address": {
00:17:30.918 "trtype": "TCP",
00:17:30.918 "adrfam": "IPv4",
00:17:30.918 "traddr": "10.0.0.1",
00:17:30.918 "trsvcid": "59640"
00:17:30.918 },
00:17:30.918 "auth": {
00:17:30.918 "state": "completed",
00:17:30.918 "digest": "sha384",
00:17:30.918 "dhgroup": "ffdhe6144"
00:17:30.918 }
00:17:30.918 }
00:17:30.918 ]'
00:17:30.918 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:31.176 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:31.176 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:31.176 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:31.176 15:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:31.176 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:31.176 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:31.177 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:31.435 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797:
00:17:31.435 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797:
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:32.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:32.002 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:32.003 15:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:32.568
00:17:32.568 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:32.568 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:32.568 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:32.568 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:32.568 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:32.568 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.568 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.568 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.568 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:32.568 {
00:17:32.568 "cntlid": 87,
00:17:32.568 "qid": 0,
00:17:32.568 "state": "enabled",
00:17:32.568 "thread": "nvmf_tgt_poll_group_000",
00:17:32.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:32.568 "listen_address": {
00:17:32.568 "trtype": "TCP",
00:17:32.568 "adrfam": "IPv4",
00:17:32.568 "traddr": "10.0.0.2",
00:17:32.568 "trsvcid": "4420"
00:17:32.568 },
00:17:32.568 "peer_address": {
00:17:32.568 "trtype": "TCP",
00:17:32.568 "adrfam": "IPv4",
00:17:32.568 "traddr": "10.0.0.1",
00:17:32.568 "trsvcid": "59666"
00:17:32.568 },
00:17:32.568 "auth": {
00:17:32.568 "state": "completed",
00:17:32.568 "digest": "sha384",
00:17:32.568 "dhgroup": "ffdhe6144"
00:17:32.568 }
00:17:32.568 }
00:17:32.568 ]'
00:17:32.568 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:32.826 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:32.826 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:32.826 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:32.826 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:32.826 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:32.826 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:32.826 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:33.084 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=:
00:17:33.084 15:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=:
00:17:33.650 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:33.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:33.650 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:33.650 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.650 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:33.650 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.650 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:33.650 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:33.650 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:33.650 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:33.650 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:17:33.651 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:33.651 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:33.651 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:33.651 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:33.651 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:33.651 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:33.651 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.651 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:33.910 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.910 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:33.910 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:33.910 15:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:34.169
00:17:34.169 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:34.169 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:34.169 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:34.428 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:34.428 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:34.428 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:34.428 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:34.428 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.428 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:34.428 {
00:17:34.428 "cntlid": 89,
00:17:34.428 "qid": 0,
00:17:34.428 "state": "enabled",
00:17:34.428 "thread": "nvmf_tgt_poll_group_000",
00:17:34.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:34.428 "listen_address": {
00:17:34.428 "trtype": "TCP",
00:17:34.428 "adrfam": "IPv4",
00:17:34.428 "traddr": "10.0.0.2",
00:17:34.428 "trsvcid": "4420"
00:17:34.428 },
00:17:34.428 "peer_address": {
00:17:34.428 "trtype": "TCP",
00:17:34.428 "adrfam": "IPv4",
00:17:34.428 "traddr": "10.0.0.1",
00:17:34.428 "trsvcid": "59680"
00:17:34.428 },
00:17:34.428 "auth": {
00:17:34.428 "state": "completed",
00:17:34.428 "digest": "sha384",
00:17:34.428 "dhgroup": "ffdhe8192"
00:17:34.428 }
00:17:34.428 }
00:17:34.428 ]'
00:17:34.428 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:34.428 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:34.428 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:34.691 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:34.691 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:34.691 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:34.691 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:34.691 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:34.691 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=:
00:17:34.691 15:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=:
00:17:35.330 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:35.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:35.330 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:35.330 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.330 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.330 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.330 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.330 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.330 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.589 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:35.589 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.589 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.589 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.589 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:35.589 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.589 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.589 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.589 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.589 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.589 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.589 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.589 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.156 00:17:36.156 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.156 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.156 15:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.414 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.414 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.414 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.414 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.414 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.414 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.414 { 00:17:36.414 "cntlid": 91, 00:17:36.414 "qid": 0, 00:17:36.414 "state": "enabled", 00:17:36.414 "thread": "nvmf_tgt_poll_group_000", 00:17:36.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:36.414 "listen_address": { 00:17:36.414 "trtype": "TCP", 00:17:36.414 "adrfam": "IPv4", 00:17:36.414 "traddr": "10.0.0.2", 00:17:36.414 "trsvcid": "4420" 00:17:36.414 }, 00:17:36.414 "peer_address": { 00:17:36.414 "trtype": "TCP", 00:17:36.414 "adrfam": "IPv4", 00:17:36.415 "traddr": "10.0.0.1", 00:17:36.415 "trsvcid": "43642" 00:17:36.415 }, 00:17:36.415 "auth": { 00:17:36.415 "state": "completed", 00:17:36.415 "digest": "sha384", 00:17:36.415 "dhgroup": "ffdhe8192" 00:17:36.415 } 00:17:36.415 } 00:17:36.415 ]' 00:17:36.415 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.415 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.415 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.415 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.415 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.415 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:36.415 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.415 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.672 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:17:36.672 15:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:17:37.238 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.238 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:37.238 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.238 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.238 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.238 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:37.238 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.238 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.503 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:37.503 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.503 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.503 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:37.503 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:37.503 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.503 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.503 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.503 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.503 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.503 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.503 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.503 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.069 00:17:38.069 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.069 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.069 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.069 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.070 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.070 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.070 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.070 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.070 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.070 { 00:17:38.070 "cntlid": 93, 00:17:38.070 "qid": 0, 00:17:38.070 "state": "enabled", 00:17:38.070 "thread": "nvmf_tgt_poll_group_000", 00:17:38.070 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:38.070 "listen_address": { 00:17:38.070 "trtype": "TCP", 00:17:38.070 "adrfam": "IPv4", 00:17:38.070 "traddr": "10.0.0.2", 00:17:38.070 "trsvcid": "4420" 00:17:38.070 }, 00:17:38.070 "peer_address": { 00:17:38.070 "trtype": "TCP", 00:17:38.070 "adrfam": "IPv4", 00:17:38.070 "traddr": "10.0.0.1", 00:17:38.070 "trsvcid": "43670" 00:17:38.070 }, 00:17:38.070 "auth": { 00:17:38.070 "state": "completed", 00:17:38.070 "digest": "sha384", 00:17:38.070 "dhgroup": "ffdhe8192" 00:17:38.070 } 00:17:38.070 } 00:17:38.070 ]' 00:17:38.070 15:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.070 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.070 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.328 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.328 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.328 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.328 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.328 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.328 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:17:38.328 15:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:17:38.895 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.895 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:38.895 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.895 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.895 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.895 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.895 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.895 15:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.155 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:39.155 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:39.155 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.155 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.155 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:39.155 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.155 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:39.155 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.155 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.155 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.155 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:39.155 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.155 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.719 00:17:39.719 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:39.719 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.719 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.977 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.977 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.977 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.977 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.977 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.977 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.977 { 00:17:39.977 "cntlid": 95, 00:17:39.977 "qid": 0, 00:17:39.977 "state": "enabled", 00:17:39.977 "thread": "nvmf_tgt_poll_group_000", 00:17:39.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:39.977 "listen_address": { 00:17:39.977 "trtype": "TCP", 00:17:39.977 "adrfam": "IPv4", 00:17:39.977 "traddr": "10.0.0.2", 00:17:39.977 "trsvcid": "4420" 00:17:39.977 }, 00:17:39.977 "peer_address": { 00:17:39.977 "trtype": "TCP", 00:17:39.977 "adrfam": "IPv4", 00:17:39.977 "traddr": "10.0.0.1", 00:17:39.977 "trsvcid": "43700" 00:17:39.977 }, 00:17:39.977 "auth": { 00:17:39.977 "state": "completed", 00:17:39.977 "digest": "sha384", 00:17:39.977 "dhgroup": "ffdhe8192" 00:17:39.977 } 00:17:39.977 } 00:17:39.977 ]' 00:17:39.977 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.977 15:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.977 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.977 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.977 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.977 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.977 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.977 15:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.235 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:17:40.235 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:17:40.803 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.803 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:40.803 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.803 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.803 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.803 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:40.803 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.803 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.803 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.803 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.062 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:41.062 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.062 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.062 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:41.062 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:41.062 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.062 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.062 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.062 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.062 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.062 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.062 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.062 15:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.321 00:17:41.321 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.321 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.321 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.321 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.321 15:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.321 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.321 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.321 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.321 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.321 { 00:17:41.321 "cntlid": 97, 00:17:41.321 "qid": 0, 00:17:41.321 "state": "enabled", 00:17:41.321 "thread": "nvmf_tgt_poll_group_000", 00:17:41.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:41.321 "listen_address": { 00:17:41.321 "trtype": "TCP", 00:17:41.321 "adrfam": "IPv4", 00:17:41.321 "traddr": "10.0.0.2", 00:17:41.321 "trsvcid": "4420" 00:17:41.321 }, 00:17:41.321 "peer_address": { 00:17:41.321 "trtype": "TCP", 00:17:41.321 "adrfam": "IPv4", 00:17:41.321 "traddr": "10.0.0.1", 00:17:41.321 "trsvcid": "43712" 00:17:41.321 }, 00:17:41.321 "auth": { 00:17:41.321 "state": "completed", 00:17:41.321 "digest": "sha512", 00:17:41.321 "dhgroup": "null" 00:17:41.321 } 00:17:41.321 } 00:17:41.321 ]' 00:17:41.321 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.580 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.580 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.580 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:41.580 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.580 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.580 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.580 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.839 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:17:41.839 15:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.407 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.665 00:17:42.665 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.665 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.665 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.924 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.924 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.924 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.924 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.924 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.924 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.924 { 00:17:42.924 "cntlid": 99, 
00:17:42.924 "qid": 0, 00:17:42.924 "state": "enabled", 00:17:42.924 "thread": "nvmf_tgt_poll_group_000", 00:17:42.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:42.924 "listen_address": { 00:17:42.924 "trtype": "TCP", 00:17:42.924 "adrfam": "IPv4", 00:17:42.924 "traddr": "10.0.0.2", 00:17:42.924 "trsvcid": "4420" 00:17:42.924 }, 00:17:42.924 "peer_address": { 00:17:42.924 "trtype": "TCP", 00:17:42.924 "adrfam": "IPv4", 00:17:42.924 "traddr": "10.0.0.1", 00:17:42.924 "trsvcid": "43726" 00:17:42.924 }, 00:17:42.924 "auth": { 00:17:42.924 "state": "completed", 00:17:42.924 "digest": "sha512", 00:17:42.924 "dhgroup": "null" 00:17:42.924 } 00:17:42.924 } 00:17:42.925 ]' 00:17:42.925 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.925 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.925 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.925 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:42.925 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.925 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.925 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.925 15:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.183 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret 
DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:17:43.183 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:17:43.751 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.751 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:43.751 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.751 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.751 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.751 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.751 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.751 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.010 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:17:44.010 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.010 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.010 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:44.010 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:44.010 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.010 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.010 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.010 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.010 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.010 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.010 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.010 15:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.269 00:17:44.269 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.269 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.269 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.528 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.528 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.528 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.528 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.528 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.528 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.528 { 00:17:44.528 "cntlid": 101, 00:17:44.528 "qid": 0, 00:17:44.528 "state": "enabled", 00:17:44.528 "thread": "nvmf_tgt_poll_group_000", 00:17:44.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:44.528 "listen_address": { 00:17:44.528 "trtype": "TCP", 00:17:44.528 "adrfam": "IPv4", 00:17:44.528 "traddr": "10.0.0.2", 00:17:44.528 "trsvcid": "4420" 00:17:44.528 }, 00:17:44.528 "peer_address": { 00:17:44.528 "trtype": "TCP", 00:17:44.528 "adrfam": "IPv4", 00:17:44.528 "traddr": "10.0.0.1", 00:17:44.528 "trsvcid": "56716" 00:17:44.528 }, 00:17:44.528 "auth": { 00:17:44.528 "state": "completed", 00:17:44.528 "digest": "sha512", 00:17:44.528 "dhgroup": "null" 00:17:44.528 } 00:17:44.528 } 
00:17:44.528 ]' 00:17:44.528 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.528 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.528 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.528 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:44.528 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.528 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.528 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.528 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.787 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:17:44.787 15:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:17:45.354 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.354 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.354 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:45.354 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.354 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.354 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.354 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.354 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.354 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.613 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:45.613 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.613 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.613 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:45.613 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:45.613 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.613 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:45.613 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.613 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.613 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.613 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:45.613 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.613 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.872 00:17:45.872 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.872 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.872 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.131 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.131 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:46.131 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.131 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.131 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.131 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.131 { 00:17:46.131 "cntlid": 103, 00:17:46.131 "qid": 0, 00:17:46.131 "state": "enabled", 00:17:46.131 "thread": "nvmf_tgt_poll_group_000", 00:17:46.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:46.131 "listen_address": { 00:17:46.131 "trtype": "TCP", 00:17:46.131 "adrfam": "IPv4", 00:17:46.131 "traddr": "10.0.0.2", 00:17:46.131 "trsvcid": "4420" 00:17:46.131 }, 00:17:46.131 "peer_address": { 00:17:46.131 "trtype": "TCP", 00:17:46.131 "adrfam": "IPv4", 00:17:46.131 "traddr": "10.0.0.1", 00:17:46.131 "trsvcid": "56744" 00:17:46.131 }, 00:17:46.131 "auth": { 00:17:46.131 "state": "completed", 00:17:46.131 "digest": "sha512", 00:17:46.131 "dhgroup": "null" 00:17:46.131 } 00:17:46.131 } 00:17:46.131 ]' 00:17:46.131 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.131 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.131 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.131 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:46.131 15:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.131 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.131 15:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.131 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.390 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:17:46.390 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:17:46.956 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.956 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:46.956 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.956 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.956 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.956 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.956 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.956 15:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.956 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.215 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:47.215 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.215 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.215 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:47.215 15:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:47.215 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.215 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.215 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.215 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.215 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.215 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.215 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.215 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.474 00:17:47.474 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.474 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.474 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.474 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.474 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.474 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.474 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.733 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.733 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.733 { 00:17:47.733 "cntlid": 105, 00:17:47.733 "qid": 0, 00:17:47.733 "state": "enabled", 00:17:47.733 "thread": "nvmf_tgt_poll_group_000", 00:17:47.733 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:47.733 "listen_address": { 00:17:47.733 "trtype": "TCP", 00:17:47.733 "adrfam": "IPv4", 00:17:47.733 "traddr": "10.0.0.2", 00:17:47.733 "trsvcid": "4420" 00:17:47.733 }, 00:17:47.733 "peer_address": { 00:17:47.733 "trtype": "TCP", 00:17:47.733 "adrfam": "IPv4", 00:17:47.733 "traddr": "10.0.0.1", 00:17:47.733 "trsvcid": "56764" 00:17:47.733 }, 00:17:47.733 "auth": { 00:17:47.733 "state": "completed", 00:17:47.733 "digest": "sha512", 00:17:47.733 "dhgroup": "ffdhe2048" 00:17:47.733 } 00:17:47.733 } 00:17:47.733 ]' 00:17:47.733 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.733 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.733 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.733 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:47.733 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.733 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.733 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.733 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.992 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret 
DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:17:47.992 15:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:17:48.558 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.558 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:48.558 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.558 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.558 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.558 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.558 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.558 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.817 15:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:17:48.817 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:48.817 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:48.817 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:48.817 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:48.817 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:48.817 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:48.817 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:48.817 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.817 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:48.817 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:48.817 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:48.817 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:49.075
00:17:49.075 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:49.075 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:49.075 15:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:49.075 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:49.075 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:49.075 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:49.075 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:49.075 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:49.075 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:49.075 {
00:17:49.075 "cntlid": 107,
00:17:49.075 "qid": 0,
00:17:49.075 "state": "enabled",
00:17:49.075 "thread": "nvmf_tgt_poll_group_000",
00:17:49.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:49.075 "listen_address": {
00:17:49.075 "trtype": "TCP",
00:17:49.075 "adrfam": "IPv4",
00:17:49.075 "traddr": "10.0.0.2",
00:17:49.075 "trsvcid": "4420"
00:17:49.075 },
00:17:49.075 "peer_address": {
00:17:49.075 "trtype": "TCP",
00:17:49.076 "adrfam": "IPv4",
00:17:49.076 "traddr": "10.0.0.1",
00:17:49.076 "trsvcid": "56796"
00:17:49.076 },
00:17:49.076 "auth": {
00:17:49.076 "state": "completed",
00:17:49.076 "digest": "sha512",
00:17:49.076 "dhgroup": "ffdhe2048"
00:17:49.076 }
00:17:49.076 }
00:17:49.076 ]'
00:17:49.076 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:49.356 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:49.356 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:49.356 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:49.356 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:49.356 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:49.356 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:49.356 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:49.613 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==:
00:17:49.613 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==:
00:17:50.179 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:50.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:50.179 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:50.180 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.180 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:50.180 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.180 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:50.180 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:50.180 15:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:50.438 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:17:50.438 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:50.438 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:50.438 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:50.438 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:50.438 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:50.438 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:50.438 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.438 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:50.438 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.438 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:50.438 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:50.438 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:50.438
00:17:50.696 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:50.696 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:50.696 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:50.696 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:50.696 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:50.696 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.696 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:50.696 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.696 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:50.696 {
00:17:50.696 "cntlid": 109,
00:17:50.696 "qid": 0,
00:17:50.696 "state": "enabled",
00:17:50.696 "thread": "nvmf_tgt_poll_group_000",
00:17:50.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:50.696 "listen_address": {
00:17:50.696 "trtype": "TCP",
00:17:50.696 "adrfam": "IPv4",
00:17:50.696 "traddr": "10.0.0.2",
00:17:50.696 "trsvcid": "4420"
00:17:50.696 },
00:17:50.696 "peer_address": {
00:17:50.696 "trtype": "TCP",
00:17:50.697 "adrfam": "IPv4",
00:17:50.697 "traddr": "10.0.0.1",
00:17:50.697 "trsvcid": "56822"
00:17:50.697 },
00:17:50.697 "auth": {
00:17:50.697 "state": "completed",
00:17:50.697 "digest": "sha512",
00:17:50.697 "dhgroup": "ffdhe2048"
00:17:50.697 }
00:17:50.697 }
00:17:50.697 ]'
00:17:50.697 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:50.956 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:50.956 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:50.956 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:50.956 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:50.956 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:50.956 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:50.956 15:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:51.215 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797:
00:17:51.215 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797:
00:17:51.625 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:51.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:51.625 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:51.625 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.625 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:51.625 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.625 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:51.625 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:51.625 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:51.883 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:17:51.883 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:51.883 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:51.883 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:51.883 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:51.883 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:51.883 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:51.883 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.883 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:51.883 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.883 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:51.883 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:51.883 15:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:52.142
00:17:52.142 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:52.142 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:52.142 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:52.399 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:52.399 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:52.399 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:52.400 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:52.400 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:52.400 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:52.400 {
00:17:52.400 "cntlid": 111,
00:17:52.400 "qid": 0,
00:17:52.400 "state": "enabled",
00:17:52.400 "thread": "nvmf_tgt_poll_group_000",
00:17:52.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:52.400 "listen_address": {
00:17:52.400 "trtype": "TCP",
00:17:52.400 "adrfam": "IPv4",
00:17:52.400 "traddr": "10.0.0.2",
00:17:52.400 "trsvcid": "4420"
00:17:52.400 },
00:17:52.400 "peer_address": {
00:17:52.400 "trtype": "TCP",
00:17:52.400 "adrfam": "IPv4",
00:17:52.400 "traddr": "10.0.0.1",
00:17:52.400 "trsvcid": "56848"
00:17:52.400 },
00:17:52.400 "auth": {
00:17:52.400 "state": "completed",
00:17:52.400 "digest": "sha512",
00:17:52.400 "dhgroup": "ffdhe2048"
00:17:52.400 }
00:17:52.400 }
00:17:52.400 ]'
00:17:52.400 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:52.400 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:52.400 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:52.400 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:52.400 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:52.400 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:52.400 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:52.400 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:52.657 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=:
00:17:52.657 15:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=:
00:17:53.223 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:53.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:53.223 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:53.223 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.223 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.223 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.223 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:53.223 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:53.223 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:53.223 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:53.481 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:17:53.481 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:53.481 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:53.481 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:53.481 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:53.481 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:53.481 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:53.481 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.481 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.481 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.481 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:53.481 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:53.481 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:53.739
00:17:53.739 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:53.739 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:53.739 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:53.996 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:53.996 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:53.996 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:53.996 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:53.996 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:53.996 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:53.996 {
00:17:53.996 "cntlid": 113,
00:17:53.996 "qid": 0,
00:17:53.996 "state": "enabled",
00:17:53.996 "thread": "nvmf_tgt_poll_group_000",
00:17:53.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:53.996 "listen_address": {
00:17:53.996 "trtype": "TCP",
00:17:53.996 "adrfam": "IPv4",
00:17:53.996 "traddr": "10.0.0.2",
00:17:53.996 "trsvcid": "4420"
00:17:53.996 },
00:17:53.996 "peer_address": {
00:17:53.996 "trtype": "TCP",
00:17:53.996 "adrfam": "IPv4",
00:17:53.996 "traddr": "10.0.0.1",
00:17:53.996 "trsvcid": "56864"
00:17:53.996 },
00:17:53.996 "auth": {
00:17:53.996 "state": "completed",
00:17:53.996 "digest": "sha512",
00:17:53.996 "dhgroup": "ffdhe3072"
00:17:53.997 }
00:17:53.997 }
00:17:53.997 ]'
00:17:53.997 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:53.997 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:53.997 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:53.997 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:53.997 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:53.997 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:53.997 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:53.997 15:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:54.255 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=:
00:17:54.255 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=:
00:17:54.821 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:54.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:54.821 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:54.821 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:54.821 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:54.821 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:54.821 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:54.821 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:54.821 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:55.079 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:17:55.079 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:55.079 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:55.079 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:55.079 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:55.079 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:55.079 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:55.079 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:55.079 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:55.079 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:55.079 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:55.079 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:55.079 15:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:55.336
00:17:55.336 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:55.336 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:55.336 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:55.594 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:55.594 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:55.594 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:55.594 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:55.594 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:55.594 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:55.594 {
00:17:55.594 "cntlid": 115,
00:17:55.594 "qid": 0,
00:17:55.594 "state": "enabled",
00:17:55.594 "thread": "nvmf_tgt_poll_group_000",
00:17:55.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:55.594 "listen_address": {
00:17:55.594 "trtype": "TCP",
00:17:55.594 "adrfam": "IPv4",
00:17:55.594 "traddr": "10.0.0.2",
00:17:55.594 "trsvcid": "4420"
00:17:55.594 },
00:17:55.594 "peer_address": {
00:17:55.594 "trtype": "TCP",
00:17:55.594 "adrfam": "IPv4",
00:17:55.594 "traddr": "10.0.0.1",
00:17:55.594 "trsvcid": "41734"
00:17:55.594 },
00:17:55.594 "auth": {
00:17:55.594 "state": "completed",
00:17:55.594 "digest": "sha512",
00:17:55.594 "dhgroup": "ffdhe3072"
00:17:55.594 }
00:17:55.594 }
00:17:55.594 ]'
00:17:55.594 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:55.594 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:55.594 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:55.594 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:55.594 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:55.594 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:55.594 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:55.594 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:55.852 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==:
00:17:55.852 15:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==:
00:17:56.420 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:56.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:56.420 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:56.420 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:56.420 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:56.420 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:56.420 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:56.420 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:56.420 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:56.679 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:17:56.679 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:56.679 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:56.679 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:56.679 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:56.679 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:56.679 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:56.679 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:56.679 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:56.679 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:56.679 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:56.679 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:56.679 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:56.938
00:17:56.938 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:56.938 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:56.938 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:57.197 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:57.197 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:57.197 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:57.197 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:57.197 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:57.197 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:57.197 {
00:17:57.197 "cntlid": 117,
00:17:57.197 "qid": 0,
00:17:57.197 "state": "enabled",
00:17:57.197 "thread": "nvmf_tgt_poll_group_000",
00:17:57.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:57.197 "listen_address": {
00:17:57.197 "trtype": "TCP",
00:17:57.197 "adrfam": "IPv4",
00:17:57.197 "traddr": "10.0.0.2",
00:17:57.197 "trsvcid": "4420"
00:17:57.197 },
00:17:57.197 "peer_address": {
00:17:57.197 "trtype": "TCP",
00:17:57.197 "adrfam": "IPv4",
00:17:57.197 "traddr": "10.0.0.1",
00:17:57.197 "trsvcid": "41764"
00:17:57.197 },
00:17:57.197 "auth": {
00:17:57.197 "state": "completed",
00:17:57.197 "digest": "sha512",
00:17:57.197 "dhgroup": "ffdhe3072"
00:17:57.197 }
00:17:57.197 }
00:17:57.197 ]'
00:17:57.197 15:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:57.197 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:57.197 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:57.197 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:57.197 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:57.197 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:57.197 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:57.197 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.456 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:17:57.456 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:17:58.024 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.024 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.024 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.024 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.024 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.024 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.024 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.024 15:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:58.283 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:58.283 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.283 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.283 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:58.283 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:58.283 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.283 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:58.283 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.283 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.283 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.283 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:58.283 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.283 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.542 00:17:58.542 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.542 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.542 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.800 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.800 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.800 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.800 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.800 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.800 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.800 { 00:17:58.800 "cntlid": 119, 00:17:58.800 "qid": 0, 00:17:58.800 "state": "enabled", 00:17:58.800 "thread": "nvmf_tgt_poll_group_000", 00:17:58.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:58.800 "listen_address": { 00:17:58.800 "trtype": "TCP", 00:17:58.800 "adrfam": "IPv4", 00:17:58.800 "traddr": "10.0.0.2", 00:17:58.800 "trsvcid": "4420" 00:17:58.800 }, 00:17:58.800 "peer_address": { 00:17:58.800 "trtype": "TCP", 00:17:58.800 "adrfam": "IPv4", 00:17:58.800 "traddr": "10.0.0.1", 
00:17:58.800 "trsvcid": "41788" 00:17:58.800 }, 00:17:58.800 "auth": { 00:17:58.800 "state": "completed", 00:17:58.800 "digest": "sha512", 00:17:58.800 "dhgroup": "ffdhe3072" 00:17:58.800 } 00:17:58.800 } 00:17:58.800 ]' 00:17:58.800 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.800 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.800 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.800 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.800 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.800 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.800 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.800 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.059 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:17:59.059 15:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:17:59.626 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.626 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:59.626 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.626 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.626 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.626 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.626 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.626 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.626 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.884 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:59.884 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.884 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.884 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:59.884 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:59.884 15:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.884 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.884 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.884 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.884 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.884 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.884 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.884 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.143 00:18:00.143 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.143 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.143 15:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.402 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.402 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.402 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.402 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.402 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.402 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.402 { 00:18:00.402 "cntlid": 121, 00:18:00.402 "qid": 0, 00:18:00.402 "state": "enabled", 00:18:00.402 "thread": "nvmf_tgt_poll_group_000", 00:18:00.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:00.402 "listen_address": { 00:18:00.402 "trtype": "TCP", 00:18:00.402 "adrfam": "IPv4", 00:18:00.402 "traddr": "10.0.0.2", 00:18:00.402 "trsvcid": "4420" 00:18:00.402 }, 00:18:00.402 "peer_address": { 00:18:00.402 "trtype": "TCP", 00:18:00.402 "adrfam": "IPv4", 00:18:00.402 "traddr": "10.0.0.1", 00:18:00.402 "trsvcid": "41828" 00:18:00.402 }, 00:18:00.402 "auth": { 00:18:00.402 "state": "completed", 00:18:00.402 "digest": "sha512", 00:18:00.402 "dhgroup": "ffdhe4096" 00:18:00.402 } 00:18:00.402 } 00:18:00.402 ]' 00:18:00.402 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.402 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.402 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.402 15:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.402 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.402 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.402 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.402 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.661 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:18:00.661 15:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:18:01.229 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.229 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:01.229 15:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.229 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.229 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.229 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.229 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.229 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.488 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:01.488 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.488 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.488 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:01.488 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:01.488 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.488 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.488 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.488 15:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.488 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.488 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.488 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.488 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.746 00:18:01.747 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.747 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.747 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.747 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.747 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.747 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.747 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.005 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.005 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.005 { 00:18:02.005 "cntlid": 123, 00:18:02.005 "qid": 0, 00:18:02.005 "state": "enabled", 00:18:02.005 "thread": "nvmf_tgt_poll_group_000", 00:18:02.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:02.005 "listen_address": { 00:18:02.005 "trtype": "TCP", 00:18:02.005 "adrfam": "IPv4", 00:18:02.005 "traddr": "10.0.0.2", 00:18:02.005 "trsvcid": "4420" 00:18:02.005 }, 00:18:02.005 "peer_address": { 00:18:02.005 "trtype": "TCP", 00:18:02.005 "adrfam": "IPv4", 00:18:02.005 "traddr": "10.0.0.1", 00:18:02.005 "trsvcid": "41854" 00:18:02.005 }, 00:18:02.005 "auth": { 00:18:02.005 "state": "completed", 00:18:02.005 "digest": "sha512", 00:18:02.005 "dhgroup": "ffdhe4096" 00:18:02.005 } 00:18:02.005 } 00:18:02.005 ]' 00:18:02.005 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.005 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.005 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.005 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.005 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.005 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.005 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.005 15:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.264 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:18:02.264 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:18:02.831 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.831 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:02.831 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.831 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.831 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.831 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.831 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.831 15:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.090 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:03.090 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.090 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.090 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:03.090 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:03.090 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.090 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.090 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.090 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.090 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.090 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.090 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.090 15:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.349 00:18:03.349 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.349 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.349 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.349 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.349 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.349 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.349 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.608 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.608 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.608 { 00:18:03.608 "cntlid": 125, 00:18:03.608 "qid": 0, 00:18:03.608 "state": "enabled", 00:18:03.608 "thread": "nvmf_tgt_poll_group_000", 00:18:03.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:03.608 "listen_address": { 00:18:03.608 "trtype": "TCP", 00:18:03.608 "adrfam": "IPv4", 00:18:03.608 "traddr": "10.0.0.2", 00:18:03.608 
"trsvcid": "4420" 00:18:03.608 }, 00:18:03.608 "peer_address": { 00:18:03.608 "trtype": "TCP", 00:18:03.608 "adrfam": "IPv4", 00:18:03.608 "traddr": "10.0.0.1", 00:18:03.608 "trsvcid": "41888" 00:18:03.608 }, 00:18:03.608 "auth": { 00:18:03.608 "state": "completed", 00:18:03.608 "digest": "sha512", 00:18:03.608 "dhgroup": "ffdhe4096" 00:18:03.608 } 00:18:03.608 } 00:18:03.608 ]' 00:18:03.608 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.608 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.608 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.608 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.608 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.608 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.608 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.608 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.867 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:18:03.867 15:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:18:04.434 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.434 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:04.434 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.434 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.434 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.434 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.434 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.434 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.434 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:04.434 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.693 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.693 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:04.693 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:04.693 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.693 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:04.693 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.693 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.693 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.693 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:04.693 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.693 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.952 00:18:04.952 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.952 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.952 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.952 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.952 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.952 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.952 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.952 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.952 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.952 { 00:18:04.952 "cntlid": 127, 00:18:04.952 "qid": 0, 00:18:04.952 "state": "enabled", 00:18:04.952 "thread": "nvmf_tgt_poll_group_000", 00:18:04.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:04.952 "listen_address": { 00:18:04.952 "trtype": "TCP", 00:18:04.952 "adrfam": "IPv4", 00:18:04.952 "traddr": "10.0.0.2", 00:18:04.952 "trsvcid": "4420" 00:18:04.952 }, 00:18:04.952 "peer_address": { 00:18:04.952 "trtype": "TCP", 00:18:04.952 "adrfam": "IPv4", 00:18:04.952 "traddr": "10.0.0.1", 00:18:04.952 "trsvcid": "34994" 00:18:04.952 }, 00:18:04.952 "auth": { 00:18:04.952 "state": "completed", 00:18:04.952 "digest": "sha512", 00:18:04.952 "dhgroup": "ffdhe4096" 00:18:04.952 } 00:18:04.952 } 00:18:04.952 ]' 00:18:04.952 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.211 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.211 15:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.211 15:35:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.211 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.211 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.211 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.211 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.470 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:18:05.470 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:18:06.042 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.042 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:06.042 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.042 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:06.042 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.042 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.042 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.042 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.042 15:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:06.042 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:06.042 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.042 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.042 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:06.042 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:06.042 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.042 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.042 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.042 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:06.042 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.042 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.042 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.042 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.610 00:18:06.610 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.610 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.610 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.610 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.610 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.610 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.610 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.610 15:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.610 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.610 { 00:18:06.610 "cntlid": 129, 00:18:06.610 "qid": 0, 00:18:06.610 "state": "enabled", 00:18:06.610 "thread": "nvmf_tgt_poll_group_000", 00:18:06.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:06.610 "listen_address": { 00:18:06.610 "trtype": "TCP", 00:18:06.610 "adrfam": "IPv4", 00:18:06.610 "traddr": "10.0.0.2", 00:18:06.610 "trsvcid": "4420" 00:18:06.610 }, 00:18:06.610 "peer_address": { 00:18:06.611 "trtype": "TCP", 00:18:06.611 "adrfam": "IPv4", 00:18:06.611 "traddr": "10.0.0.1", 00:18:06.611 "trsvcid": "35024" 00:18:06.611 }, 00:18:06.611 "auth": { 00:18:06.611 "state": "completed", 00:18:06.611 "digest": "sha512", 00:18:06.611 "dhgroup": "ffdhe6144" 00:18:06.611 } 00:18:06.611 } 00:18:06.611 ]' 00:18:06.611 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.869 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.869 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.869 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.869 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.869 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.869 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.869 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.128 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:18:07.128 15:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.695 15:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.695 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.954 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.954 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.954 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.954 15:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.213 00:18:08.213 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.213 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.213 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.472 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.472 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.472 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.472 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.472 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.472 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.472 { 00:18:08.472 "cntlid": 131, 00:18:08.472 "qid": 0, 00:18:08.472 "state": "enabled", 00:18:08.472 "thread": "nvmf_tgt_poll_group_000", 00:18:08.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:08.472 "listen_address": { 00:18:08.472 "trtype": "TCP", 00:18:08.472 "adrfam": "IPv4", 00:18:08.472 "traddr": "10.0.0.2", 00:18:08.472 
"trsvcid": "4420" 00:18:08.472 }, 00:18:08.472 "peer_address": { 00:18:08.472 "trtype": "TCP", 00:18:08.472 "adrfam": "IPv4", 00:18:08.472 "traddr": "10.0.0.1", 00:18:08.472 "trsvcid": "35064" 00:18:08.472 }, 00:18:08.472 "auth": { 00:18:08.472 "state": "completed", 00:18:08.472 "digest": "sha512", 00:18:08.472 "dhgroup": "ffdhe6144" 00:18:08.472 } 00:18:08.472 } 00:18:08.472 ]' 00:18:08.472 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.472 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.472 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.472 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.472 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.472 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.472 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.472 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.731 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:18:08.731 15:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==: 00:18:09.297 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.297 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:09.297 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.297 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.297 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.297 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.297 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.297 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.555 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:09.555 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.555 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.555 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:09.555 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:09.555 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.555 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.555 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.555 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.555 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.555 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.555 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.555 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.813 00:18:09.813 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.813 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:09.813 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.071 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.071 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.071 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.071 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.071 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.071 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.071 { 00:18:10.071 "cntlid": 133, 00:18:10.071 "qid": 0, 00:18:10.071 "state": "enabled", 00:18:10.071 "thread": "nvmf_tgt_poll_group_000", 00:18:10.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:10.071 "listen_address": { 00:18:10.071 "trtype": "TCP", 00:18:10.071 "adrfam": "IPv4", 00:18:10.071 "traddr": "10.0.0.2", 00:18:10.071 "trsvcid": "4420" 00:18:10.071 }, 00:18:10.071 "peer_address": { 00:18:10.071 "trtype": "TCP", 00:18:10.071 "adrfam": "IPv4", 00:18:10.071 "traddr": "10.0.0.1", 00:18:10.071 "trsvcid": "35074" 00:18:10.071 }, 00:18:10.071 "auth": { 00:18:10.071 "state": "completed", 00:18:10.071 "digest": "sha512", 00:18:10.071 "dhgroup": "ffdhe6144" 00:18:10.071 } 00:18:10.071 } 00:18:10.071 ]' 00:18:10.071 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.071 15:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.071 15:35:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.071 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.071 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.329 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.329 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.329 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.329 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:18:10.329 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797: 00:18:10.894 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.894 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:10.894 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.895 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:10.895 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.895 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:10.895 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:10.895 15:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:11.152 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:18:11.152 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:11.152 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:11.152 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:11.152 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:11.152 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:11.152 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:18:11.152 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.152 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.152 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.152 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:11.152 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:11.152 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:11.410
00:18:11.668 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:11.668 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:11.668 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:11.668 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:11.668 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:11.668 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.668 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:11.668 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.668 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:11.668 {
00:18:11.668 "cntlid": 135,
00:18:11.668 "qid": 0,
00:18:11.668 "state": "enabled",
00:18:11.668 "thread": "nvmf_tgt_poll_group_000",
00:18:11.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:18:11.668 "listen_address": {
00:18:11.668 "trtype": "TCP",
00:18:11.668 "adrfam": "IPv4",
00:18:11.668 "traddr": "10.0.0.2",
00:18:11.668 "trsvcid": "4420"
00:18:11.668 },
00:18:11.668 "peer_address": {
00:18:11.668 "trtype": "TCP",
00:18:11.668 "adrfam": "IPv4",
00:18:11.668 "traddr": "10.0.0.1",
00:18:11.668 "trsvcid": "35098"
00:18:11.668 },
00:18:11.668 "auth": {
00:18:11.668 "state": "completed",
00:18:11.668 "digest": "sha512",
00:18:11.668 "dhgroup": "ffdhe6144"
00:18:11.668 }
00:18:11.668 }
00:18:11.668 ]'
00:18:11.668 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:11.925 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:11.925 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:11.925 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:11.925 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:11.925 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:11.925 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:11.925 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:12.183 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=:
00:18:12.183 15:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=:
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:12.749 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:13.008 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:13.008 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.008 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:13.008 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.008 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:13.008 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:13.008 15:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:13.265
00:18:13.265 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:13.265 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:13.265 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:13.523 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:13.523 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:13.523 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:13.523 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:13.523 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:13.523 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:13.523 {
00:18:13.523 "cntlid": 137,
00:18:13.523 "qid": 0,
00:18:13.523 "state": "enabled",
00:18:13.523 "thread": "nvmf_tgt_poll_group_000",
00:18:13.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:18:13.523 "listen_address": {
00:18:13.523 "trtype": "TCP",
00:18:13.523 "adrfam": "IPv4",
00:18:13.523 "traddr": "10.0.0.2",
00:18:13.523 "trsvcid": "4420"
00:18:13.523 },
00:18:13.523 "peer_address": {
00:18:13.523 "trtype": "TCP",
00:18:13.523 "adrfam": "IPv4",
00:18:13.523 "traddr": "10.0.0.1",
00:18:13.523 "trsvcid": "35138"
00:18:13.523 },
00:18:13.523 "auth": {
00:18:13.523 "state": "completed",
00:18:13.523 "digest": "sha512",
00:18:13.523 "dhgroup": "ffdhe8192"
00:18:13.523 }
00:18:13.523 }
00:18:13.523 ]'
00:18:13.523 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:13.523 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:13.524 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:13.782 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:13.782 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:13.782 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:13.782 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:13.782 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:14.039 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=:
00:18:14.039 15:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=:
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:14.605 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:14.864 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:14.864 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:14.864 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:14.864 15:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:15.122
00:18:15.122 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:15.122 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:15.122 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:15.380 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:15.380 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:15.380 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:15.380 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:15.380 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:15.380 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:15.380 {
00:18:15.380 "cntlid": 139,
00:18:15.380 "qid": 0,
00:18:15.380 "state": "enabled",
00:18:15.380 "thread": "nvmf_tgt_poll_group_000",
00:18:15.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:18:15.380 "listen_address": {
00:18:15.380 "trtype": "TCP",
00:18:15.380 "adrfam": "IPv4",
00:18:15.380 "traddr": "10.0.0.2",
00:18:15.380 "trsvcid": "4420"
00:18:15.380 },
00:18:15.380 "peer_address": {
00:18:15.380 "trtype": "TCP",
00:18:15.380 "adrfam": "IPv4",
00:18:15.380 "traddr": "10.0.0.1",
00:18:15.380 "trsvcid": "40262"
00:18:15.380 },
00:18:15.380 "auth": {
00:18:15.380 "state": "completed",
00:18:15.380 "digest": "sha512",
00:18:15.380 "dhgroup": "ffdhe8192"
00:18:15.380 }
00:18:15.380 }
00:18:15.380 ]'
00:18:15.380 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:15.380 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:15.380 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:15.639 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:15.639 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:15.639 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:15.639 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:15.639 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:15.897 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==:
00:18:15.897 15:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: --dhchap-ctrl-secret DHHC-1:02:OTJlZDEwMGQyZmQwZWYzOTQ3NDVlM2Y4MDEwNzUzOTExMDdlMDY5NjcwNTA1MGIyKOCqNA==:
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:16.466 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:17.033
00:18:17.033 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:17.033 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:17.033 15:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:17.291 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:17.291 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:17.291 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:17.291 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:17.291 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:17.291 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:17.291 {
00:18:17.291 "cntlid": 141,
00:18:17.291 "qid": 0,
00:18:17.291 "state": "enabled",
00:18:17.291 "thread": "nvmf_tgt_poll_group_000",
00:18:17.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:18:17.291 "listen_address": {
00:18:17.291 "trtype": "TCP",
00:18:17.291 "adrfam": "IPv4",
00:18:17.291 "traddr": "10.0.0.2",
00:18:17.291 "trsvcid": "4420"
00:18:17.291 },
00:18:17.291 "peer_address": {
00:18:17.291 "trtype": "TCP",
00:18:17.291 "adrfam": "IPv4",
00:18:17.291 "traddr": "10.0.0.1",
00:18:17.291 "trsvcid": "40286"
00:18:17.291 },
00:18:17.291 "auth": {
00:18:17.291 "state": "completed",
00:18:17.291 "digest": "sha512",
00:18:17.291 "dhgroup": "ffdhe8192"
00:18:17.291 }
00:18:17.291 }
00:18:17.291 ]'
00:18:17.291 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:17.291 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:17.291 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:17.291 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:17.291 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:17.291 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:17.291 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:17.291 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:17.550 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797:
00:18:17.550 15:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:01:MTJhNzUzMzdiY2U0ZDk0MGUxZTE1ZDUwMWMwM2E0ODmYV797:
00:18:18.118 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:18.118 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:18.118 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.118 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.118 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.118 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:18.118 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:18:18.118 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:18:18.377 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:18:18.377 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:18.377 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:18.377 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:18.377 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:18.377 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:18.377 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:18:18.377 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.377 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.377 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.377 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:18.377 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:18.377 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:18.943
00:18:18.943 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:18.943 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:18.943 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:18.943 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:18.943 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:18.943 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:18.943 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.943 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:18.943 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:18.943 {
00:18:18.943 "cntlid": 143,
00:18:18.943 "qid": 0,
00:18:18.943 "state": "enabled",
00:18:18.943 "thread": "nvmf_tgt_poll_group_000",
00:18:18.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:18:18.943 "listen_address": {
00:18:18.943 "trtype": "TCP",
00:18:18.943 "adrfam": "IPv4",
00:18:18.943 "traddr": "10.0.0.2",
00:18:18.943 "trsvcid": "4420"
00:18:18.943 },
00:18:18.944 "peer_address": {
00:18:18.944 "trtype": "TCP",
00:18:18.944 "adrfam": "IPv4",
00:18:18.944 "traddr": "10.0.0.1",
00:18:18.944 "trsvcid": "40310"
00:18:18.944 },
00:18:18.944 "auth": {
00:18:18.944 "state": "completed",
00:18:18.944 "digest": "sha512",
00:18:18.944 "dhgroup": "ffdhe8192"
00:18:18.944 }
00:18:18.944 }
00:18:18.944 ]'
00:18:18.944 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:19.202 15:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:19.203 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:19.203 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:19.203 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:19.203 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:19.203 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:19.203 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:19.459 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=:
00:18:19.459 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=:
00:18:20.023 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:20.023 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:18:20.023 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.023 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.023 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.023 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:18:20.023 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:18:20.023 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:18:20.023 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:20.023 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:20.023 15:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:18:20.280 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:18:20.280 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:20.280 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:20.280 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:20.280 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:20.280 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:20.280 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:20.280 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.280 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.280 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.280 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:20.280 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:20.280 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:20.846
00:18:20.846 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:20.846 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:20.846 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:20.846 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:20.846 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:20.846 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.846 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.846 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.846 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:20.846 {
00:18:20.846 "cntlid": 145,
00:18:20.846 "qid": 0,
00:18:20.846 "state": "enabled",
00:18:20.846 "thread": "nvmf_tgt_poll_group_000",
00:18:20.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:18:20.846 "listen_address": {
00:18:20.846 "trtype": "TCP",
00:18:20.846 "adrfam": "IPv4",
00:18:20.846 "traddr": "10.0.0.2",
00:18:20.846 "trsvcid": "4420"
00:18:20.846 },
00:18:20.846 "peer_address": {
00:18:20.846 "trtype": "TCP",
00:18:20.846 "adrfam": "IPv4",
00:18:20.846 "traddr": "10.0.0.1",
00:18:20.846 "trsvcid": "40330"
00:18:20.846 },
00:18:20.846 "auth": {
00:18:20.846 "state":
"completed", 00:18:20.846 "digest": "sha512", 00:18:20.846 "dhgroup": "ffdhe8192" 00:18:20.846 } 00:18:20.846 } 00:18:20.846 ]' 00:18:20.846 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.846 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.846 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.846 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.846 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.104 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.104 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.104 15:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.105 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:18:21.105 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODEyMDc1ZjQ4MzNlMGJhYTZmNjg5Zjk2NzZiZDljNjZjZThmNTc4ZDMyMmI2MmNi/Zijfg==: --dhchap-ctrl-secret 
DHHC-1:03:MTA1ZmQ0Y2NjZjRjZTJmODE3MDJhMDYyODg1YmY5YmI1NDBjNDBiMmUzNjZjYzRiMTA4MWFjMzc5OWE1NjdmYRJ/++E=: 00:18:21.672 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.672 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:21.672 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:21.931 15:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:22.190 request: 00:18:22.190 { 00:18:22.190 "name": "nvme0", 00:18:22.190 "trtype": "tcp", 00:18:22.190 "traddr": "10.0.0.2", 00:18:22.190 "adrfam": "ipv4", 00:18:22.190 "trsvcid": "4420", 00:18:22.190 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:22.190 "prchk_reftag": false, 00:18:22.190 "prchk_guard": false, 00:18:22.190 "hdgst": false, 00:18:22.190 "ddgst": false, 00:18:22.190 "dhchap_key": "key2", 00:18:22.190 "allow_unrecognized_csi": false, 00:18:22.190 "method": "bdev_nvme_attach_controller", 00:18:22.190 "req_id": 1 00:18:22.190 } 00:18:22.190 Got JSON-RPC error response 00:18:22.190 response: 00:18:22.190 { 00:18:22.190 "code": -5, 00:18:22.190 "message": 
"Input/output error" 00:18:22.190 } 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:22.190 15:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.190 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.759 request: 00:18:22.759 { 00:18:22.759 "name": "nvme0", 00:18:22.759 "trtype": "tcp", 00:18:22.759 "traddr": "10.0.0.2", 00:18:22.759 "adrfam": "ipv4", 00:18:22.759 "trsvcid": "4420", 00:18:22.759 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:22.759 "prchk_reftag": false, 00:18:22.759 "prchk_guard": false, 00:18:22.759 "hdgst": 
false, 00:18:22.759 "ddgst": false, 00:18:22.759 "dhchap_key": "key1", 00:18:22.759 "dhchap_ctrlr_key": "ckey2", 00:18:22.759 "allow_unrecognized_csi": false, 00:18:22.759 "method": "bdev_nvme_attach_controller", 00:18:22.759 "req_id": 1 00:18:22.759 } 00:18:22.759 Got JSON-RPC error response 00:18:22.759 response: 00:18:22.759 { 00:18:22.759 "code": -5, 00:18:22.759 "message": "Input/output error" 00:18:22.759 } 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.759 15:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.327 request: 00:18:23.327 { 00:18:23.327 "name": "nvme0", 00:18:23.327 "trtype": 
"tcp", 00:18:23.327 "traddr": "10.0.0.2", 00:18:23.327 "adrfam": "ipv4", 00:18:23.327 "trsvcid": "4420", 00:18:23.327 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:23.327 "prchk_reftag": false, 00:18:23.327 "prchk_guard": false, 00:18:23.327 "hdgst": false, 00:18:23.327 "ddgst": false, 00:18:23.327 "dhchap_key": "key1", 00:18:23.327 "dhchap_ctrlr_key": "ckey1", 00:18:23.327 "allow_unrecognized_csi": false, 00:18:23.327 "method": "bdev_nvme_attach_controller", 00:18:23.327 "req_id": 1 00:18:23.327 } 00:18:23.327 Got JSON-RPC error response 00:18:23.327 response: 00:18:23.327 { 00:18:23.327 "code": -5, 00:18:23.327 "message": "Input/output error" 00:18:23.327 } 00:18:23.327 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:23.327 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:23.327 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:23.327 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:23.327 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:23.327 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.327 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.327 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.327 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2990449 00:18:23.327 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2990449 ']' 00:18:23.327 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2990449 00:18:23.327 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:23.328 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.328 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2990449 00:18:23.328 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.328 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.328 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2990449' 00:18:23.328 killing process with pid 2990449 00:18:23.328 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2990449 00:18:23.328 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2990449 00:18:23.328 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:23.328 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:23.328 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:23.328 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.586 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3011957 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3011957 00:18:23.587 15:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3011957 ']' 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3011957 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3011957 ']' 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.587 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.846 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.846 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:23.846 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:23.846 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.846 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.106 null0 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1x6 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.xSn ]] 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xSn 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.d1J 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.L3l ]] 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L3l 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:24.106 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yj2 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.4bH ]] 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4bH 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.uYH 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.107 15:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.043 nvme0n1 00:18:25.043 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.043 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.043 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.043 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.043 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.043 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.043 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.044 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.044 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.044 { 00:18:25.044 "cntlid": 1, 00:18:25.044 "qid": 0, 00:18:25.044 "state": "enabled", 00:18:25.044 "thread": "nvmf_tgt_poll_group_000", 00:18:25.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:25.044 "listen_address": { 00:18:25.044 "trtype": "TCP", 00:18:25.044 "adrfam": "IPv4", 00:18:25.044 "traddr": "10.0.0.2", 00:18:25.044 "trsvcid": "4420" 00:18:25.044 }, 00:18:25.044 "peer_address": { 00:18:25.044 "trtype": "TCP", 00:18:25.044 "adrfam": "IPv4", 00:18:25.044 "traddr": 
"10.0.0.1", 00:18:25.044 "trsvcid": "44010" 00:18:25.044 }, 00:18:25.044 "auth": { 00:18:25.044 "state": "completed", 00:18:25.044 "digest": "sha512", 00:18:25.044 "dhgroup": "ffdhe8192" 00:18:25.044 } 00:18:25.044 } 00:18:25.044 ]' 00:18:25.044 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.044 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.044 15:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.044 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.044 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.303 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.303 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.303 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.303 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:18:25.303 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:18:25.871 15:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.871 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:25.871 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.871 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.871 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.871 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:25.871 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.871 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.871 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.871 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:25.871 15:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:26.131 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:26.131 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:26.131 15:35:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:26.131 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:26.131 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.131 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:26.131 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.131 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.131 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.131 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.390 request: 00:18:26.390 { 00:18:26.390 "name": "nvme0", 00:18:26.390 "trtype": "tcp", 00:18:26.390 "traddr": "10.0.0.2", 00:18:26.390 "adrfam": "ipv4", 00:18:26.390 "trsvcid": "4420", 00:18:26.390 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:26.390 "prchk_reftag": false, 00:18:26.390 "prchk_guard": false, 00:18:26.390 "hdgst": false, 00:18:26.390 "ddgst": false, 00:18:26.390 "dhchap_key": "key3", 00:18:26.390 
"allow_unrecognized_csi": false, 00:18:26.390 "method": "bdev_nvme_attach_controller", 00:18:26.390 "req_id": 1 00:18:26.390 } 00:18:26.390 Got JSON-RPC error response 00:18:26.390 response: 00:18:26.390 { 00:18:26.390 "code": -5, 00:18:26.390 "message": "Input/output error" 00:18:26.390 } 00:18:26.390 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:26.390 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.390 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.390 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.390 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:26.390 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:26.390 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:26.390 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:26.649 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:26.649 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:26.649 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:26.649 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:26.649 15:35:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.649 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:26.649 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.649 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.650 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.650 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.650 request: 00:18:26.650 { 00:18:26.650 "name": "nvme0", 00:18:26.650 "trtype": "tcp", 00:18:26.650 "traddr": "10.0.0.2", 00:18:26.650 "adrfam": "ipv4", 00:18:26.650 "trsvcid": "4420", 00:18:26.650 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:26.650 "prchk_reftag": false, 00:18:26.650 "prchk_guard": false, 00:18:26.650 "hdgst": false, 00:18:26.650 "ddgst": false, 00:18:26.650 "dhchap_key": "key3", 00:18:26.650 "allow_unrecognized_csi": false, 00:18:26.650 "method": "bdev_nvme_attach_controller", 00:18:26.650 "req_id": 1 00:18:26.650 } 00:18:26.650 Got JSON-RPC error response 00:18:26.650 response: 00:18:26.650 { 00:18:26.650 "code": -5, 00:18:26.650 "message": "Input/output error" 00:18:26.650 } 00:18:26.650 
15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:26.650 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.650 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.650 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.650 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:26.650 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:26.650 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:26.650 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.650 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.650 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:26.909 15:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.478 request: 00:18:27.478 { 00:18:27.478 "name": "nvme0", 00:18:27.478 "trtype": "tcp", 00:18:27.478 "traddr": "10.0.0.2", 00:18:27.478 "adrfam": "ipv4", 00:18:27.478 "trsvcid": "4420", 00:18:27.478 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:27.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:27.478 "prchk_reftag": false, 00:18:27.478 "prchk_guard": false, 00:18:27.478 "hdgst": false, 00:18:27.478 "ddgst": false, 00:18:27.478 "dhchap_key": "key0", 00:18:27.478 "dhchap_ctrlr_key": "key1", 00:18:27.478 "allow_unrecognized_csi": false, 00:18:27.478 "method": "bdev_nvme_attach_controller", 00:18:27.478 "req_id": 1 00:18:27.478 } 00:18:27.478 Got JSON-RPC error response 00:18:27.478 response: 00:18:27.478 { 00:18:27.478 "code": -5, 00:18:27.478 "message": "Input/output error" 00:18:27.478 } 00:18:27.478 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:27.478 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.478 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.478 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.478 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:27.478 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:27.478 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:27.478 nvme0n1 00:18:27.737 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:27.737 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:27.737 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.737 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.737 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.737 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.997 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:27.997 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.997 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:27.997 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.997 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:27.997 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:27.997 15:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:28.934 nvme0n1 00:18:28.934 15:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:28.934 15:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:28.934 15:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.934 15:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.934 15:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:28.934 15:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.934 15:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.934 
15:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.934 15:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:28.934 15:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:28.934 15:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.193 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.193 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:18:29.193 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: --dhchap-ctrl-secret DHHC-1:03:Mzc0NjNlNmI4YTAyM2ZmMmE1MDcwMGQ2OWM1NDRmMTAyMTI3OGM4YTNkYWU0Njk4NGJmMTkyYmI0OTdhMmZlYeugzwg=: 00:18:29.760 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:29.760 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:29.760 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:29.760 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:29.760 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:29.760 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:29.760 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:29.760 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.760 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.018 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:30.018 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:30.018 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:30.018 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:30.018 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.018 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:30.018 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.018 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:30.018 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:30.018 15:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:30.276 request: 00:18:30.276 { 00:18:30.276 "name": "nvme0", 00:18:30.276 "trtype": "tcp", 00:18:30.276 "traddr": "10.0.0.2", 00:18:30.276 "adrfam": "ipv4", 00:18:30.276 "trsvcid": "4420", 00:18:30.276 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:30.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:30.276 "prchk_reftag": false, 00:18:30.276 "prchk_guard": false, 00:18:30.276 "hdgst": false, 00:18:30.276 "ddgst": false, 00:18:30.276 "dhchap_key": "key1", 00:18:30.276 "allow_unrecognized_csi": false, 00:18:30.276 "method": "bdev_nvme_attach_controller", 00:18:30.276 "req_id": 1 00:18:30.276 } 00:18:30.276 Got JSON-RPC error response 00:18:30.276 response: 00:18:30.276 { 00:18:30.276 "code": -5, 00:18:30.276 "message": "Input/output error" 00:18:30.276 } 00:18:30.276 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:30.276 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.276 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.276 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.276 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:30.276 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:30.276 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:31.211 nvme0n1 00:18:31.211 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:31.211 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:31.211 15:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.211 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.211 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.211 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.471 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:31.471 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.471 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:31.471 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.471 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:31.471 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:31.471 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:31.729 nvme0n1 00:18:31.729 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:31.729 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:31.729 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.988 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.988 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.988 15:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: '' 2s 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: ]] 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Njg1YjY3ZDQ4MGM0MGVmZTAyOTNjZmI3MmVjYmE3YzcRZ7jo: 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:32.246 15:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:34.153 
15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: 2s 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:34.153 15:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: ]] 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTFiODAxMmFiZTE1YWY2NDM2NjA0OTQ5NTNhMTg5ODY3N2FhNTBlMjM1MGMzNzY21J/+LA==: 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:34.153 15:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:36.684 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:36.684 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:36.685 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:36.685 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:36.685 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:36.685 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:36.685 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:36.685 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.685 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:36.685 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.685 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.685 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.685 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:36.685 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:36.685 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:36.943 nvme0n1 00:18:36.943 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:18:36.943 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.943 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.202 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.202 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:37.202 15:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:37.461 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:37.461 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:37.461 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.720 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.720 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:37.720 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.720 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.720 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.720 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:37.720 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:37.978 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:37.978 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:37.978 15:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.236 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.236 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:38.236 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.236 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.236 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.236 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:38.236 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:38.236 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:38.236 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:38.236 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:38.236 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:38.236 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:38.236 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:38.236 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:38.494 request: 00:18:38.494 { 00:18:38.494 "name": "nvme0", 00:18:38.494 "dhchap_key": "key1", 00:18:38.494 "dhchap_ctrlr_key": "key3", 00:18:38.494 "method": "bdev_nvme_set_keys", 00:18:38.494 "req_id": 1 00:18:38.494 } 00:18:38.494 Got JSON-RPC error response 00:18:38.494 response: 00:18:38.494 { 00:18:38.494 "code": -13, 00:18:38.494 "message": "Permission denied" 00:18:38.494 } 00:18:38.753 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:38.753 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:38.753 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:38.753 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:38.753 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:38.753 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:38.753 15:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.753 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:38.753 15:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:40.128 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:40.128 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:40.128 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.128 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:40.128 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:40.128 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.128 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.128 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.128 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:40.128 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:40.128 15:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:40.694 nvme0n1 00:18:40.694 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:40.694 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.694 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.694 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.694 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:40.694 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:40.694 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:40.694 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:40.694 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.694 15:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:40.694 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.694 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:40.694 15:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:41.263 request: 00:18:41.263 { 00:18:41.263 "name": "nvme0", 00:18:41.263 "dhchap_key": "key2", 00:18:41.263 "dhchap_ctrlr_key": "key0", 00:18:41.263 "method": "bdev_nvme_set_keys", 00:18:41.263 "req_id": 1 00:18:41.263 } 00:18:41.263 Got JSON-RPC error response 00:18:41.263 response: 00:18:41.263 { 00:18:41.263 "code": -13, 00:18:41.263 "message": "Permission denied" 00:18:41.263 } 00:18:41.263 15:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:41.263 15:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.263 15:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.263 15:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.263 15:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:41.263 15:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:41.263 15:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.522 15:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:41.522 15:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:42.459 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:42.459 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:42.459 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.718 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:42.718 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:42.718 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:42.718 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2990478 00:18:42.718 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2990478 ']' 00:18:42.718 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2990478 00:18:42.718 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:42.718 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.718 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2990478 00:18:42.718 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:42.718 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:42.718 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2990478' 00:18:42.718 killing process with pid 2990478 00:18:42.719 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2990478 00:18:42.719 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2990478 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:42.978 rmmod nvme_tcp 00:18:42.978 rmmod nvme_fabrics 00:18:42.978 rmmod nvme_keyring 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3011957 ']' 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3011957 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3011957 ']' 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3011957 
00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.978 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3011957 00:18:43.237 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.238 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.238 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3011957' 00:18:43.238 killing process with pid 3011957 00:18:43.238 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3011957 00:18:43.238 15:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3011957 00:18:43.238 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:43.238 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:43.238 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:43.238 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:43.238 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:43.238 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:43.238 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:43.238 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:43.238 15:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:43.238 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.238 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.238 15:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1x6 /tmp/spdk.key-sha256.d1J /tmp/spdk.key-sha384.yj2 /tmp/spdk.key-sha512.uYH /tmp/spdk.key-sha512.xSn /tmp/spdk.key-sha384.L3l /tmp/spdk.key-sha256.4bH '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:45.776 00:18:45.776 real 2m31.554s 00:18:45.776 user 5m49.114s 00:18:45.776 sys 0m24.284s 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.776 ************************************ 00:18:45.776 END TEST nvmf_auth_target 00:18:45.776 ************************************ 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:45.776 ************************************ 00:18:45.776 START TEST nvmf_bdevio_no_huge 00:18:45.776 ************************************ 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:45.776 * Looking for test storage... 00:18:45.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:45.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.776 --rc genhtml_branch_coverage=1 00:18:45.776 --rc genhtml_function_coverage=1 00:18:45.776 --rc genhtml_legend=1 00:18:45.776 --rc geninfo_all_blocks=1 00:18:45.776 --rc geninfo_unexecuted_blocks=1 00:18:45.776 00:18:45.776 ' 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:45.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.776 --rc genhtml_branch_coverage=1 00:18:45.776 --rc genhtml_function_coverage=1 00:18:45.776 --rc genhtml_legend=1 00:18:45.776 --rc geninfo_all_blocks=1 00:18:45.776 --rc geninfo_unexecuted_blocks=1 00:18:45.776 00:18:45.776 ' 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:45.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.776 --rc genhtml_branch_coverage=1 00:18:45.776 --rc genhtml_function_coverage=1 00:18:45.776 --rc genhtml_legend=1 00:18:45.776 --rc geninfo_all_blocks=1 00:18:45.776 --rc geninfo_unexecuted_blocks=1 00:18:45.776 00:18:45.776 ' 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:45.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.776 --rc genhtml_branch_coverage=1 
00:18:45.776 --rc genhtml_function_coverage=1 00:18:45.776 --rc genhtml_legend=1 00:18:45.776 --rc geninfo_all_blocks=1 00:18:45.776 --rc geninfo_unexecuted_blocks=1 00:18:45.776 00:18:45.776 ' 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.776 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:45.777 15:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:45.777 15:35:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:18:52.354 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:52.354 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:52.354 Found net devices under 0000:86:00.0: cvl_0_0 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.354 
15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:52.354 Found net devices under 0000:86:00.1: cvl_0_1 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:52.354 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.354 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:18:52.354 00:18:52.354 --- 10.0.0.2 ping statistics --- 00:18:52.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.354 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:52.354 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:52.354 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:18:52.354 00:18:52.354 --- 10.0.0.1 ping statistics --- 00:18:52.354 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.354 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3018759 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3018759 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3018759 ']' 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.354 15:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.354 [2024-12-06 15:35:57.545107] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:18:52.354 [2024-12-06 15:35:57.545159] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:52.354 [2024-12-06 15:35:57.628892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.354 [2024-12-06 15:35:57.675340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.354 [2024-12-06 15:35:57.675381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.355 [2024-12-06 15:35:57.675390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.355 [2024-12-06 15:35:57.675398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.355 [2024-12-06 15:35:57.675404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:52.355 [2024-12-06 15:35:57.676673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:52.355 [2024-12-06 15:35:57.676784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:52.355 [2024-12-06 15:35:57.676887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.355 [2024-12-06 15:35:57.676887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:52.613 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.613 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:52.613 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:52.613 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:52.613 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.613 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.614 [2024-12-06 15:35:58.425811] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:52.614 15:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.614 Malloc0 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.614 [2024-12-06 15:35:58.470087] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:52.614 15:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:52.614 { 00:18:52.614 "params": { 00:18:52.614 "name": "Nvme$subsystem", 00:18:52.614 "trtype": "$TEST_TRANSPORT", 00:18:52.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.614 "adrfam": "ipv4", 00:18:52.614 "trsvcid": "$NVMF_PORT", 00:18:52.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.614 "hdgst": ${hdgst:-false}, 00:18:52.614 "ddgst": ${ddgst:-false} 00:18:52.614 }, 00:18:52.614 "method": "bdev_nvme_attach_controller" 00:18:52.614 } 00:18:52.614 EOF 00:18:52.614 )") 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:52.614 15:35:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:52.614 "params": { 00:18:52.614 "name": "Nvme1", 00:18:52.614 "trtype": "tcp", 00:18:52.614 "traddr": "10.0.0.2", 00:18:52.614 "adrfam": "ipv4", 00:18:52.614 "trsvcid": "4420", 00:18:52.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.614 "hdgst": false, 00:18:52.614 "ddgst": false 00:18:52.614 }, 00:18:52.614 "method": "bdev_nvme_attach_controller" 00:18:52.614 }' 00:18:52.614 [2024-12-06 15:35:58.522285] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:18:52.614 [2024-12-06 15:35:58.522336] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3018877 ] 00:18:52.614 [2024-12-06 15:35:58.599681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:52.871 [2024-12-06 15:35:58.648731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.871 [2024-12-06 15:35:58.648836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.871 [2024-12-06 15:35:58.648837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.129 I/O targets: 00:18:53.129 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:53.129 00:18:53.129 00:18:53.129 CUnit - A unit testing framework for C - Version 2.1-3 00:18:53.129 http://cunit.sourceforge.net/ 00:18:53.129 00:18:53.129 00:18:53.129 Suite: bdevio tests on: Nvme1n1 00:18:53.129 Test: blockdev write read block ...passed 00:18:53.129 Test: blockdev write zeroes read block ...passed 00:18:53.129 Test: blockdev write zeroes read no split ...passed 00:18:53.129 Test: blockdev write zeroes 
read split ...passed 00:18:53.129 Test: blockdev write zeroes read split partial ...passed 00:18:53.129 Test: blockdev reset ...[2024-12-06 15:35:59.096085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:53.129 [2024-12-06 15:35:59.096151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d66510 (9): Bad file descriptor 00:18:53.387 [2024-12-06 15:35:59.198598] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:53.387 passed 00:18:53.387 Test: blockdev write read 8 blocks ...passed 00:18:53.387 Test: blockdev write read size > 128k ...passed 00:18:53.387 Test: blockdev write read invalid size ...passed 00:18:53.387 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:53.387 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:53.387 Test: blockdev write read max offset ...passed 00:18:53.387 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:53.387 Test: blockdev writev readv 8 blocks ...passed 00:18:53.387 Test: blockdev writev readv 30 x 1block ...passed 00:18:53.646 Test: blockdev writev readv block ...passed 00:18:53.646 Test: blockdev writev readv size > 128k ...passed 00:18:53.646 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:53.646 Test: blockdev comparev and writev ...[2024-12-06 15:35:59.455255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.646 [2024-12-06 15:35:59.455281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:53.646 [2024-12-06 15:35:59.455296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.647 [2024-12-06 
15:35:59.455303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:53.647 [2024-12-06 15:35:59.455532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.647 [2024-12-06 15:35:59.455543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:53.647 [2024-12-06 15:35:59.455555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.647 [2024-12-06 15:35:59.455562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:53.647 [2024-12-06 15:35:59.455775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.647 [2024-12-06 15:35:59.455785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:53.647 [2024-12-06 15:35:59.455797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.647 [2024-12-06 15:35:59.455803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:53.647 [2024-12-06 15:35:59.456024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.647 [2024-12-06 15:35:59.456034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:53.647 [2024-12-06 15:35:59.456046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:53.647 [2024-12-06 15:35:59.456052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:53.647 passed 00:18:53.647 Test: blockdev nvme passthru rw ...passed 00:18:53.647 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:35:59.537758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.647 [2024-12-06 15:35:59.537774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:53.647 [2024-12-06 15:35:59.537876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.647 [2024-12-06 15:35:59.537885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:53.647 [2024-12-06 15:35:59.537992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.647 [2024-12-06 15:35:59.538008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:53.647 [2024-12-06 15:35:59.538102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:53.647 [2024-12-06 15:35:59.538112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:53.647 passed 00:18:53.647 Test: blockdev nvme admin passthru ...passed 00:18:53.647 Test: blockdev copy ...passed 00:18:53.647 00:18:53.647 Run Summary: Type Total Ran Passed Failed Inactive 00:18:53.647 suites 1 1 n/a 0 0 00:18:53.647 tests 23 23 23 0 0 00:18:53.647 asserts 152 152 152 0 n/a 00:18:53.647 00:18:53.647 Elapsed time = 1.346 seconds 
00:18:53.906 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:53.906 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.906 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.906 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.906 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:53.906 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:53.906 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:53.906 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:53.906 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:53.906 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:53.906 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:53.906 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:53.906 rmmod nvme_tcp 00:18:54.165 rmmod nvme_fabrics 00:18:54.165 rmmod nvme_keyring 00:18:54.165 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:54.165 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:54.165 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:54.165 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3018759 ']' 00:18:54.165 15:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3018759 00:18:54.165 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3018759 ']' 00:18:54.165 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3018759 00:18:54.165 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:54.165 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.165 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3018759 00:18:54.165 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:54.165 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:54.165 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3018759' 00:18:54.165 killing process with pid 3018759 00:18:54.165 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3018759 00:18:54.165 15:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3018759 00:18:54.425 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:54.425 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:54.425 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:54.425 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:54.425 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:54.425 15:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:54.425 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:54.425 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:54.425 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:54.425 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.425 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.425 15:36:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:56.964 00:18:56.964 real 0m11.067s 00:18:56.964 user 0m14.578s 00:18:56.964 sys 0m5.444s 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:56.964 ************************************ 00:18:56.964 END TEST nvmf_bdevio_no_huge 00:18:56.964 ************************************ 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:56.964 
************************************ 00:18:56.964 START TEST nvmf_tls 00:18:56.964 ************************************ 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:56.964 * Looking for test storage... 00:18:56.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:56.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.964 --rc genhtml_branch_coverage=1 00:18:56.964 --rc genhtml_function_coverage=1 00:18:56.964 --rc genhtml_legend=1 00:18:56.964 --rc geninfo_all_blocks=1 00:18:56.964 --rc geninfo_unexecuted_blocks=1 00:18:56.964 00:18:56.964 ' 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:56.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.964 --rc genhtml_branch_coverage=1 00:18:56.964 --rc genhtml_function_coverage=1 00:18:56.964 --rc genhtml_legend=1 00:18:56.964 --rc geninfo_all_blocks=1 00:18:56.964 --rc geninfo_unexecuted_blocks=1 00:18:56.964 00:18:56.964 ' 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:56.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.964 --rc genhtml_branch_coverage=1 00:18:56.964 --rc genhtml_function_coverage=1 00:18:56.964 --rc genhtml_legend=1 00:18:56.964 --rc geninfo_all_blocks=1 00:18:56.964 --rc geninfo_unexecuted_blocks=1 00:18:56.964 00:18:56.964 ' 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:56.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.964 --rc genhtml_branch_coverage=1 00:18:56.964 --rc genhtml_function_coverage=1 00:18:56.964 --rc genhtml_legend=1 00:18:56.964 --rc geninfo_all_blocks=1 00:18:56.964 --rc geninfo_unexecuted_blocks=1 00:18:56.964 00:18:56.964 ' 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.964 
15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.964 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:56.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:56.965 15:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:03.683 15:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:03.683 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:03.683 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:03.683 15:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:03.683 Found net devices under 0000:86:00.0: cvl_0_0 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:03.683 Found net devices under 0000:86:00.1: cvl_0_1 00:19:03.683 15:36:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:03.683 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:03.684 
15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:03.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:03.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:19:03.684 00:19:03.684 --- 10.0.0.2 ping statistics --- 00:19:03.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.684 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:03.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:03.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:19:03.684 00:19:03.684 --- 10.0.0.1 ping statistics --- 00:19:03.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.684 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3022944 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3022944 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3022944 ']' 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.684 [2024-12-06 15:36:08.715765] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:19:03.684 [2024-12-06 15:36:08.715812] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.684 [2024-12-06 15:36:08.796279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.684 [2024-12-06 15:36:08.836606] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.684 [2024-12-06 15:36:08.836643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:03.684 [2024-12-06 15:36:08.836650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.684 [2024-12-06 15:36:08.836657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.684 [2024-12-06 15:36:08.836662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.684 [2024-12-06 15:36:08.837249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:03.684 15:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:03.684 true 00:19:03.684 15:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:03.684 15:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.684 15:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:03.684 15:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:03.684 
15:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:03.684 15:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.684 15:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:03.950 15:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:03.950 15:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:03.950 15:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:03.950 15:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.950 15:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:04.209 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:04.209 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:04.209 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:04.209 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:04.469 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:04.469 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:04.469 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:19:04.728 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:04.728 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:04.728 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:04.728 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:04.728 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:04.987 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:04.987 15:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:05.252 15:36:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.47p34iWV6v 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.OLnr7ru9kN 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:05.252 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:05.253 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.47p34iWV6v 00:19:05.253 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.OLnr7ru9kN 00:19:05.253 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:05.516 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:05.774 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.47p34iWV6v 00:19:05.774 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.47p34iWV6v 00:19:05.774 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:06.032 [2024-12-06 15:36:11.785312] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.032 15:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:06.032 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:06.290 [2024-12-06 15:36:12.182318] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:06.290 [2024-12-06 15:36:12.182538] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.290 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:06.548 malloc0 00:19:06.548 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:06.805 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.47p34iWV6v 00:19:06.805 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:07.063 15:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.47p34iWV6v 00:19:19.292 Initializing NVMe Controllers 00:19:19.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:19.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:19.292 Initialization complete. Launching workers. 
00:19:19.292 ======================================================== 00:19:19.292 Latency(us) 00:19:19.292 Device Information : IOPS MiB/s Average min max 00:19:19.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16910.75 66.06 3784.63 830.10 5925.59 00:19:19.292 ======================================================== 00:19:19.292 Total : 16910.75 66.06 3784.63 830.10 5925.59 00:19:19.292 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.47p34iWV6v 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.47p34iWV6v 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3025722 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3025722 /var/tmp/bdevperf.sock 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3025722 ']' 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.292 [2024-12-06 15:36:23.155570] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:19:19.292 [2024-12-06 15:36:23.155621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3025722 ] 00:19:19.292 [2024-12-06 15:36:23.231317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.292 [2024-12-06 15:36:23.270928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.47p34iWV6v 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:19:19.292 [2024-12-06 15:36:23.723045] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:19.292 TLSTESTn1 00:19:19.292 15:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:19.292 Running I/O for 10 seconds... 00:19:20.226 5427.00 IOPS, 21.20 MiB/s [2024-12-06T14:36:27.160Z] 5539.50 IOPS, 21.64 MiB/s [2024-12-06T14:36:28.098Z] 5519.33 IOPS, 21.56 MiB/s [2024-12-06T14:36:29.034Z] 5538.75 IOPS, 21.64 MiB/s [2024-12-06T14:36:29.970Z] 5545.20 IOPS, 21.66 MiB/s [2024-12-06T14:36:31.349Z] 5555.33 IOPS, 21.70 MiB/s [2024-12-06T14:36:32.284Z] 5454.29 IOPS, 21.31 MiB/s [2024-12-06T14:36:33.222Z] 5390.00 IOPS, 21.05 MiB/s [2024-12-06T14:36:34.157Z] 5332.00 IOPS, 20.83 MiB/s [2024-12-06T14:36:34.157Z] 5268.30 IOPS, 20.58 MiB/s 00:19:28.159 Latency(us) 00:19:28.159 [2024-12-06T14:36:34.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.159 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:28.159 Verification LBA range: start 0x0 length 0x2000 00:19:28.159 TLSTESTn1 : 10.02 5272.44 20.60 0.00 0.00 24241.18 6366.35 34453.21 00:19:28.159 [2024-12-06T14:36:34.157Z] =================================================================================================================== 00:19:28.159 [2024-12-06T14:36:34.157Z] Total : 5272.44 20.60 0.00 0.00 24241.18 6366.35 34453.21 00:19:28.159 { 00:19:28.159 "results": [ 00:19:28.159 { 00:19:28.159 "job": "TLSTESTn1", 00:19:28.159 "core_mask": "0x4", 00:19:28.159 "workload": "verify", 00:19:28.159 "status": "finished", 00:19:28.159 "verify_range": { 00:19:28.159 "start": 0, 00:19:28.159 "length": 8192 00:19:28.159 }, 00:19:28.159 "queue_depth": 128, 00:19:28.159 "io_size": 4096, 00:19:28.159 "runtime": 10.016236, 00:19:28.159 "iops": 
5272.439666956729, 00:19:28.159 "mibps": 20.59546744904972, 00:19:28.159 "io_failed": 0, 00:19:28.159 "io_timeout": 0, 00:19:28.159 "avg_latency_us": 24241.178767801914, 00:19:28.159 "min_latency_us": 6366.354285714286, 00:19:28.159 "max_latency_us": 34453.21142857143 00:19:28.159 } 00:19:28.159 ], 00:19:28.159 "core_count": 1 00:19:28.159 } 00:19:28.159 15:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:28.160 15:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3025722 00:19:28.160 15:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3025722 ']' 00:19:28.160 15:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3025722 00:19:28.160 15:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.160 15:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.160 15:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3025722 00:19:28.160 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:28.160 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:28.160 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3025722' 00:19:28.160 killing process with pid 3025722 00:19:28.160 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3025722 00:19:28.160 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.160 00:19:28.160 Latency(us) 00:19:28.160 [2024-12-06T14:36:34.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.160 [2024-12-06T14:36:34.158Z] 
=================================================================================================================== 00:19:28.160 [2024-12-06T14:36:34.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.160 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3025722 00:19:28.418 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OLnr7ru9kN 00:19:28.418 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:28.418 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OLnr7ru9kN 00:19:28.418 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:28.418 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.418 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:28.418 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.418 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OLnr7ru9kN 00:19:28.418 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:28.418 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:28.418 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:28.418 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OLnr7ru9kN 00:19:28.418 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:28.419 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3027426 00:19:28.419 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:28.419 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.419 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3027426 /var/tmp/bdevperf.sock 00:19:28.419 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3027426 ']' 00:19:28.419 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.419 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.419 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.419 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.419 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.419 [2024-12-06 15:36:34.240900] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:19:28.419 [2024-12-06 15:36:34.240951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027426 ] 00:19:28.419 [2024-12-06 15:36:34.313440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.419 [2024-12-06 15:36:34.352187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.675 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.675 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.675 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OLnr7ru9kN 00:19:28.675 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.933 [2024-12-06 15:36:34.804400] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.933 [2024-12-06 15:36:34.811665] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:28.933 [2024-12-06 15:36:34.811859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42dc0 (107): Transport endpoint is not connected 00:19:28.933 [2024-12-06 15:36:34.812853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42dc0 (9): Bad file descriptor 00:19:28.933 
[2024-12-06 15:36:34.813855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:28.933 [2024-12-06 15:36:34.813868] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:28.933 [2024-12-06 15:36:34.813876] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:28.933 [2024-12-06 15:36:34.813886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:28.933 request: 00:19:28.933 { 00:19:28.933 "name": "TLSTEST", 00:19:28.933 "trtype": "tcp", 00:19:28.933 "traddr": "10.0.0.2", 00:19:28.933 "adrfam": "ipv4", 00:19:28.933 "trsvcid": "4420", 00:19:28.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.933 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.933 "prchk_reftag": false, 00:19:28.933 "prchk_guard": false, 00:19:28.933 "hdgst": false, 00:19:28.933 "ddgst": false, 00:19:28.933 "psk": "key0", 00:19:28.933 "allow_unrecognized_csi": false, 00:19:28.933 "method": "bdev_nvme_attach_controller", 00:19:28.933 "req_id": 1 00:19:28.933 } 00:19:28.933 Got JSON-RPC error response 00:19:28.933 response: 00:19:28.933 { 00:19:28.933 "code": -5, 00:19:28.933 "message": "Input/output error" 00:19:28.933 } 00:19:28.933 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3027426 00:19:28.933 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3027426 ']' 00:19:28.933 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3027426 00:19:28.933 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.933 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.933 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3027426 00:19:28.933 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:28.933 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:28.933 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3027426' 00:19:28.933 killing process with pid 3027426 00:19:28.933 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3027426 00:19:28.933 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.933 00:19:28.933 Latency(us) 00:19:28.933 [2024-12-06T14:36:34.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.933 [2024-12-06T14:36:34.931Z] =================================================================================================================== 00:19:28.933 [2024-12-06T14:36:34.931Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.933 15:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3027426 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.47p34iWV6v 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.47p34iWV6v 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.47p34iWV6v 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.47p34iWV6v 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3027579 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3027579 
/var/tmp/bdevperf.sock 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3027579 ']' 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.190 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.190 [2024-12-06 15:36:35.086397] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:19:29.190 [2024-12-06 15:36:35.086445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027579 ] 00:19:29.190 [2024-12-06 15:36:35.162909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.448 [2024-12-06 15:36:35.200587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.448 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.448 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:29.448 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.47p34iWV6v 00:19:29.705 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:29.705 [2024-12-06 15:36:35.675816] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:29.705 [2024-12-06 15:36:35.683784] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:29.705 [2024-12-06 15:36:35.683806] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:29.705 [2024-12-06 15:36:35.683833] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:29.705 [2024-12-06 15:36:35.684182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f7dc0 (107): Transport endpoint is not connected 00:19:29.705 [2024-12-06 15:36:35.685176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f7dc0 (9): Bad file descriptor 00:19:29.705 [2024-12-06 15:36:35.686178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:29.705 [2024-12-06 15:36:35.686189] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:29.705 [2024-12-06 15:36:35.686196] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:29.706 [2024-12-06 15:36:35.686207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:29.706 request: 00:19:29.706 { 00:19:29.706 "name": "TLSTEST", 00:19:29.706 "trtype": "tcp", 00:19:29.706 "traddr": "10.0.0.2", 00:19:29.706 "adrfam": "ipv4", 00:19:29.706 "trsvcid": "4420", 00:19:29.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.706 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:29.706 "prchk_reftag": false, 00:19:29.706 "prchk_guard": false, 00:19:29.706 "hdgst": false, 00:19:29.706 "ddgst": false, 00:19:29.706 "psk": "key0", 00:19:29.706 "allow_unrecognized_csi": false, 00:19:29.706 "method": "bdev_nvme_attach_controller", 00:19:29.706 "req_id": 1 00:19:29.706 } 00:19:29.706 Got JSON-RPC error response 00:19:29.706 response: 00:19:29.706 { 00:19:29.706 "code": -5, 00:19:29.706 "message": "Input/output error" 00:19:29.706 } 00:19:29.963 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3027579 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3027579 ']' 00:19:29.964 15:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3027579 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3027579 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3027579' 00:19:29.964 killing process with pid 3027579 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3027579 00:19:29.964 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.964 00:19:29.964 Latency(us) 00:19:29.964 [2024-12-06T14:36:35.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.964 [2024-12-06T14:36:35.962Z] =================================================================================================================== 00:19:29.964 [2024-12-06T14:36:35.962Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3027579 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:29.964 15:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.47p34iWV6v 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.47p34iWV6v 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.47p34iWV6v 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.47p34iWV6v 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3027813 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3027813 /var/tmp/bdevperf.sock 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3027813 ']' 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.964 15:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.221 [2024-12-06 15:36:35.967019] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:19:30.221 [2024-12-06 15:36:35.967071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027813 ] 00:19:30.221 [2024-12-06 15:36:36.036489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.221 [2024-12-06 15:36:36.073716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.221 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.221 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:30.221 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.47p34iWV6v 00:19:30.479 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:30.737 [2024-12-06 15:36:36.525879] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.737 [2024-12-06 15:36:36.535977] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:30.737 [2024-12-06 15:36:36.535998] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:30.737 [2024-12-06 15:36:36.536037] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:30.737 [2024-12-06 15:36:36.536291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2402dc0 (107): Transport endpoint is not connected 00:19:30.737 [2024-12-06 15:36:36.537285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2402dc0 (9): Bad file descriptor 00:19:30.737 [2024-12-06 15:36:36.538287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:30.737 [2024-12-06 15:36:36.538297] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:30.737 [2024-12-06 15:36:36.538304] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:30.737 [2024-12-06 15:36:36.538316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:19:30.737 request: 00:19:30.737 { 00:19:30.737 "name": "TLSTEST", 00:19:30.737 "trtype": "tcp", 00:19:30.737 "traddr": "10.0.0.2", 00:19:30.737 "adrfam": "ipv4", 00:19:30.737 "trsvcid": "4420", 00:19:30.737 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:30.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.737 "prchk_reftag": false, 00:19:30.737 "prchk_guard": false, 00:19:30.737 "hdgst": false, 00:19:30.737 "ddgst": false, 00:19:30.737 "psk": "key0", 00:19:30.737 "allow_unrecognized_csi": false, 00:19:30.737 "method": "bdev_nvme_attach_controller", 00:19:30.737 "req_id": 1 00:19:30.737 } 00:19:30.738 Got JSON-RPC error response 00:19:30.738 response: 00:19:30.738 { 00:19:30.738 "code": -5, 00:19:30.738 "message": "Input/output error" 00:19:30.738 } 00:19:30.738 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3027813 00:19:30.738 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3027813 ']' 00:19:30.738 15:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3027813 00:19:30.738 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:30.738 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.738 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3027813 00:19:30.738 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:30.738 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:30.738 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3027813' 00:19:30.738 killing process with pid 3027813 00:19:30.738 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3027813 00:19:30.738 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.738 00:19:30.738 Latency(us) 00:19:30.738 [2024-12-06T14:36:36.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.738 [2024-12-06T14:36:36.736Z] =================================================================================================================== 00:19:30.738 [2024-12-06T14:36:36.736Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:30.738 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3027813 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:30.996 15:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.996 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3027825 00:19:30.997 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.997 15:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.997 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3027825 /var/tmp/bdevperf.sock 00:19:30.997 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3027825 ']' 00:19:30.997 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.997 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.997 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.997 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.997 15:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.997 [2024-12-06 15:36:36.823847] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:19:30.997 [2024-12-06 15:36:36.823898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3027825 ] 00:19:30.997 [2024-12-06 15:36:36.889695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.997 [2024-12-06 15:36:36.927891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.255 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.255 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:31.255 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:31.255 [2024-12-06 15:36:37.206896] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:31.255 [2024-12-06 15:36:37.206930] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:31.255 request: 00:19:31.255 { 00:19:31.255 "name": "key0", 00:19:31.255 "path": "", 00:19:31.255 "method": "keyring_file_add_key", 00:19:31.255 "req_id": 1 00:19:31.255 } 00:19:31.255 Got JSON-RPC error response 00:19:31.255 response: 00:19:31.255 { 00:19:31.255 "code": -1, 00:19:31.255 "message": "Operation not permitted" 00:19:31.255 } 00:19:31.255 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:31.514 [2024-12-06 15:36:37.403513] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:31.514 [2024-12-06 15:36:37.403546] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:31.514 request: 00:19:31.514 { 00:19:31.514 "name": "TLSTEST", 00:19:31.514 "trtype": "tcp", 00:19:31.514 "traddr": "10.0.0.2", 00:19:31.514 "adrfam": "ipv4", 00:19:31.514 "trsvcid": "4420", 00:19:31.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.514 "prchk_reftag": false, 00:19:31.514 "prchk_guard": false, 00:19:31.514 "hdgst": false, 00:19:31.514 "ddgst": false, 00:19:31.514 "psk": "key0", 00:19:31.514 "allow_unrecognized_csi": false, 00:19:31.514 "method": "bdev_nvme_attach_controller", 00:19:31.514 "req_id": 1 00:19:31.514 } 00:19:31.514 Got JSON-RPC error response 00:19:31.514 response: 00:19:31.514 { 00:19:31.514 "code": -126, 00:19:31.514 "message": "Required key not available" 00:19:31.514 } 00:19:31.514 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3027825 00:19:31.514 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3027825 ']' 00:19:31.514 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3027825 00:19:31.514 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:31.514 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.514 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3027825 00:19:31.514 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:31.514 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:31.514 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3027825' 00:19:31.514 killing process with pid 3027825 
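The keyring failure traced above is a path-validation rejection: the test passes an empty string as the key path, `keyring_file_check_path` logs "Non-absolute paths are not allowed", and the key is never added, so the subsequent attach fails with "Required key not available". A loose sketch of that precondition check follows; the return codes here are illustrative errno-style values chosen to mirror the logged `-1` / "Operation not permitted" RPC error, not SPDK's exact internals:

```python
import os

def check_keyring_path(path: str) -> int:
    """Approximate the validation the log attributes to
    keyring.c:keyring_file_check_path: an empty or relative path is
    rejected before any key material is read. Illustrative only."""
    if not path or not os.path.isabs(path):
        return -1  # corresponds to the logged "Operation not permitted"
    return 0
```

This is why the `NOT run_bdevperf ... ''` case in the script expects failure at `keyring_file_add_key` rather than at the attach step: the empty-path argument never produces a usable key.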
00:19:31.514 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3027825 00:19:31.514 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.514 00:19:31.514 Latency(us) 00:19:31.514 [2024-12-06T14:36:37.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.514 [2024-12-06T14:36:37.512Z] =================================================================================================================== 00:19:31.514 [2024-12-06T14:36:37.512Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:31.514 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3027825 00:19:31.772 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:31.772 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:31.772 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.773 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.773 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.773 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3022944 00:19:31.773 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3022944 ']' 00:19:31.773 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3022944 00:19:31.773 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:31.773 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.773 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3022944 00:19:31.773 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:19:31.773 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:31.773 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3022944' 00:19:31.773 killing process with pid 3022944 00:19:31.773 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3022944 00:19:31.773 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3022944 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.vGqeJPsI7M 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:32.031 15:36:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.vGqeJPsI7M 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3028080 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3028080 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3028080 ']' 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.031 15:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.031 [2024-12-06 15:36:37.962988] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:19:32.031 [2024-12-06 15:36:37.963039] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.289 [2024-12-06 15:36:38.038495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.289 [2024-12-06 15:36:38.075672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.289 [2024-12-06 15:36:38.075705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.289 [2024-12-06 15:36:38.075714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.290 [2024-12-06 15:36:38.075720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.290 [2024-12-06 15:36:38.075725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:32.290 [2024-12-06 15:36:38.076235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.290 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.290 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:32.290 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:32.290 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:32.290 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.290 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.290 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.vGqeJPsI7M 00:19:32.290 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vGqeJPsI7M 00:19:32.290 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:32.548 [2024-12-06 15:36:38.392039] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.548 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:32.806 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:32.806 [2024-12-06 15:36:38.789043] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:32.806 [2024-12-06 15:36:38.789226] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:33.063 15:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:33.063 malloc0 00:19:33.063 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:33.321 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vGqeJPsI7M 00:19:33.577 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vGqeJPsI7M 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vGqeJPsI7M 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3028331 00:19:33.836 15:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3028331 /var/tmp/bdevperf.sock 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3028331 ']' 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.836 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.836 [2024-12-06 15:36:39.622861] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:19:33.836 [2024-12-06 15:36:39.622909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028331 ] 00:19:33.836 [2024-12-06 15:36:39.696718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.836 [2024-12-06 15:36:39.737042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.095 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.095 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:34.095 15:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vGqeJPsI7M 00:19:34.095 15:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.354 [2024-12-06 15:36:40.213067] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:34.354 TLSTESTn1 00:19:34.354 15:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:34.612 Running I/O for 10 seconds... 
00:19:36.485 5546.00 IOPS, 21.66 MiB/s [2024-12-06T14:36:43.419Z] 5536.00 IOPS, 21.62 MiB/s [2024-12-06T14:36:44.798Z] 5551.33 IOPS, 21.68 MiB/s [2024-12-06T14:36:45.733Z] 5581.25 IOPS, 21.80 MiB/s [2024-12-06T14:36:46.668Z] 5563.20 IOPS, 21.73 MiB/s [2024-12-06T14:36:47.604Z] 5581.50 IOPS, 21.80 MiB/s [2024-12-06T14:36:48.540Z] 5588.86 IOPS, 21.83 MiB/s [2024-12-06T14:36:49.507Z] 5590.00 IOPS, 21.84 MiB/s [2024-12-06T14:36:50.497Z] 5595.89 IOPS, 21.86 MiB/s [2024-12-06T14:36:50.497Z] 5599.90 IOPS, 21.87 MiB/s 00:19:44.499 Latency(us) 00:19:44.499 [2024-12-06T14:36:50.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.499 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:44.499 Verification LBA range: start 0x0 length 0x2000 00:19:44.499 TLSTESTn1 : 10.01 5605.09 21.89 0.00 0.00 22802.80 5742.20 23842.62 00:19:44.499 [2024-12-06T14:36:50.497Z] =================================================================================================================== 00:19:44.499 [2024-12-06T14:36:50.497Z] Total : 5605.09 21.89 0.00 0.00 22802.80 5742.20 23842.62 00:19:44.499 { 00:19:44.499 "results": [ 00:19:44.499 { 00:19:44.499 "job": "TLSTESTn1", 00:19:44.499 "core_mask": "0x4", 00:19:44.499 "workload": "verify", 00:19:44.499 "status": "finished", 00:19:44.499 "verify_range": { 00:19:44.499 "start": 0, 00:19:44.499 "length": 8192 00:19:44.499 }, 00:19:44.499 "queue_depth": 128, 00:19:44.499 "io_size": 4096, 00:19:44.499 "runtime": 10.013212, 00:19:44.499 "iops": 5605.094549081753, 00:19:44.499 "mibps": 21.894900582350598, 00:19:44.499 "io_failed": 0, 00:19:44.499 "io_timeout": 0, 00:19:44.499 "avg_latency_us": 22802.796430620427, 00:19:44.499 "min_latency_us": 5742.201904761905, 00:19:44.499 "max_latency_us": 23842.620952380952 00:19:44.499 } 00:19:44.499 ], 00:19:44.499 "core_count": 1 00:19:44.499 } 00:19:44.499 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:44.499 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3028331 00:19:44.499 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3028331 ']' 00:19:44.499 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3028331 00:19:44.499 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:44.499 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.499 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3028331 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3028331' 00:19:44.759 killing process with pid 3028331 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3028331 00:19:44.759 Received shutdown signal, test time was about 10.000000 seconds 00:19:44.759 00:19:44.759 Latency(us) 00:19:44.759 [2024-12-06T14:36:50.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.759 [2024-12-06T14:36:50.757Z] =================================================================================================================== 00:19:44.759 [2024-12-06T14:36:50.757Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3028331 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.vGqeJPsI7M 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vGqeJPsI7M 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vGqeJPsI7M 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vGqeJPsI7M 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vGqeJPsI7M 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3030170 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3030170 /var/tmp/bdevperf.sock 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3030170 ']' 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:44.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.759 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.759 [2024-12-06 15:36:50.723195] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:19:44.759 [2024-12-06 15:36:50.723243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3030170 ] 00:19:45.018 [2024-12-06 15:36:50.794984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.019 [2024-12-06 15:36:50.835549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.019 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.019 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:45.019 15:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vGqeJPsI7M 00:19:45.277 [2024-12-06 15:36:51.107967] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vGqeJPsI7M': 0100666 00:19:45.277 [2024-12-06 15:36:51.108000] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:45.277 request: 00:19:45.277 { 00:19:45.277 "name": "key0", 00:19:45.277 "path": "/tmp/tmp.vGqeJPsI7M", 00:19:45.277 "method": "keyring_file_add_key", 00:19:45.277 "req_id": 1 00:19:45.277 } 00:19:45.277 Got JSON-RPC error response 00:19:45.277 response: 00:19:45.277 { 00:19:45.277 "code": -1, 00:19:45.277 "message": "Operation not permitted" 00:19:45.277 } 00:19:45.277 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:45.536 [2024-12-06 15:36:51.320598] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:45.536 [2024-12-06 15:36:51.320629] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:45.536 request: 00:19:45.536 { 00:19:45.536 "name": "TLSTEST", 00:19:45.536 "trtype": "tcp", 00:19:45.536 "traddr": "10.0.0.2", 00:19:45.536 "adrfam": "ipv4", 00:19:45.536 "trsvcid": "4420", 00:19:45.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:45.536 "prchk_reftag": false, 00:19:45.536 "prchk_guard": false, 00:19:45.536 "hdgst": false, 00:19:45.536 "ddgst": false, 00:19:45.536 "psk": "key0", 00:19:45.536 "allow_unrecognized_csi": false, 00:19:45.536 "method": "bdev_nvme_attach_controller", 00:19:45.536 "req_id": 1 00:19:45.536 } 00:19:45.536 Got JSON-RPC error response 00:19:45.536 response: 00:19:45.536 { 00:19:45.536 "code": -126, 00:19:45.536 "message": "Required key not available" 00:19:45.536 } 00:19:45.536 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3030170 00:19:45.536 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3030170 ']' 00:19:45.536 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3030170 00:19:45.536 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:45.536 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.536 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3030170 00:19:45.536 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:45.536 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:45.536 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3030170' 00:19:45.536 killing process with pid 3030170 00:19:45.536 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3030170 00:19:45.536 Received shutdown signal, test time was about 10.000000 seconds 00:19:45.536 00:19:45.537 Latency(us) 00:19:45.537 [2024-12-06T14:36:51.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.537 [2024-12-06T14:36:51.535Z] =================================================================================================================== 00:19:45.537 [2024-12-06T14:36:51.535Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:45.537 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3030170 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3028080 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3028080 ']' 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3028080 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3028080 00:19:45.796 
15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3028080' 00:19:45.796 killing process with pid 3028080 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3028080 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3028080 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3030411 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3030411 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3030411 ']' 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:45.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.796 15:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.056 [2024-12-06 15:36:51.816190] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:19:46.056 [2024-12-06 15:36:51.816237] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.056 [2024-12-06 15:36:51.894363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.056 [2024-12-06 15:36:51.937817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.056 [2024-12-06 15:36:51.937851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.056 [2024-12-06 15:36:51.937858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.056 [2024-12-06 15:36:51.937864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.056 [2024-12-06 15:36:51.937869] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:46.056 [2024-12-06 15:36:51.938426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.056 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.056 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:46.056 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:46.056 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:46.056 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.315 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.315 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.vGqeJPsI7M 00:19:46.315 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:46.315 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.vGqeJPsI7M 00:19:46.315 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:46.315 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:46.315 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:46.315 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:46.315 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.vGqeJPsI7M 00:19:46.315 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vGqeJPsI7M 00:19:46.315 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:46.315 [2024-12-06 15:36:52.238400] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.315 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:46.595 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:46.854 [2024-12-06 15:36:52.643448] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:46.854 [2024-12-06 15:36:52.643653] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.854 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:47.112 malloc0 00:19:47.112 15:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:47.112 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vGqeJPsI7M 00:19:47.370 [2024-12-06 15:36:53.248988] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vGqeJPsI7M': 0100666 00:19:47.370 [2024-12-06 15:36:53.249012] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:47.370 request: 00:19:47.370 { 00:19:47.370 "name": "key0", 00:19:47.370 "path": "/tmp/tmp.vGqeJPsI7M", 00:19:47.370 "method": "keyring_file_add_key", 00:19:47.370 "req_id": 1 
00:19:47.370 } 00:19:47.370 Got JSON-RPC error response 00:19:47.370 response: 00:19:47.370 { 00:19:47.370 "code": -1, 00:19:47.370 "message": "Operation not permitted" 00:19:47.370 } 00:19:47.370 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:47.629 [2024-12-06 15:36:53.445519] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:47.629 [2024-12-06 15:36:53.445548] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:47.629 request: 00:19:47.629 { 00:19:47.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.629 "host": "nqn.2016-06.io.spdk:host1", 00:19:47.629 "psk": "key0", 00:19:47.629 "method": "nvmf_subsystem_add_host", 00:19:47.629 "req_id": 1 00:19:47.629 } 00:19:47.629 Got JSON-RPC error response 00:19:47.629 response: 00:19:47.629 { 00:19:47.629 "code": -32603, 00:19:47.629 "message": "Internal error" 00:19:47.629 } 00:19:47.629 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:47.629 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:47.629 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:47.629 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:47.629 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3030411 00:19:47.629 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3030411 ']' 00:19:47.629 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3030411 00:19:47.629 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:47.629 15:36:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:47.629 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3030411 00:19:47.629 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:47.629 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:47.629 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3030411' 00:19:47.629 killing process with pid 3030411 00:19:47.629 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3030411 00:19:47.629 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3030411 00:19:47.887 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.vGqeJPsI7M 00:19:47.887 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:47.887 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:47.887 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.887 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.887 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3030682 00:19:47.887 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:47.887 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3030682 00:19:47.887 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3030682 ']' 00:19:47.887 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.887 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.887 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.887 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.887 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.887 [2024-12-06 15:36:53.742256] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:19:47.887 [2024-12-06 15:36:53.742304] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.887 [2024-12-06 15:36:53.818363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.887 [2024-12-06 15:36:53.859195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.887 [2024-12-06 15:36:53.859232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.887 [2024-12-06 15:36:53.859240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.887 [2024-12-06 15:36:53.859246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.887 [2024-12-06 15:36:53.859252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:47.887 [2024-12-06 15:36:53.859834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.146 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.146 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.146 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.146 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.146 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.146 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.146 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.vGqeJPsI7M 00:19:48.146 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vGqeJPsI7M 00:19:48.146 15:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:48.403 [2024-12-06 15:36:54.173431] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.403 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:48.692 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:48.692 [2024-12-06 15:36:54.566430] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:48.692 [2024-12-06 15:36:54.566624] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:48.692 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:48.950 malloc0 00:19:48.950 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:49.208 15:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vGqeJPsI7M 00:19:49.208 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:49.465 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3031001 00:19:49.465 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:49.465 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:49.465 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3031001 /var/tmp/bdevperf.sock 00:19:49.465 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3031001 ']' 00:19:49.465 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.465 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.465 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:49.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.465 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.465 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.465 [2024-12-06 15:36:55.423514] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:19:49.465 [2024-12-06 15:36:55.423563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3031001 ] 00:19:49.722 [2024-12-06 15:36:55.498943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.722 [2024-12-06 15:36:55.539019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.722 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.722 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:49.722 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vGqeJPsI7M 00:19:49.979 15:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:50.236 [2024-12-06 15:36:55.998436] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.236 TLSTESTn1 00:19:50.236 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:50.492 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:50.492 "subsystems": [ 00:19:50.492 { 00:19:50.492 "subsystem": "keyring", 00:19:50.492 "config": [ 00:19:50.492 { 00:19:50.492 "method": "keyring_file_add_key", 00:19:50.492 "params": { 00:19:50.492 "name": "key0", 00:19:50.492 "path": "/tmp/tmp.vGqeJPsI7M" 00:19:50.492 } 00:19:50.492 } 00:19:50.492 ] 00:19:50.492 }, 00:19:50.492 { 00:19:50.492 "subsystem": "iobuf", 00:19:50.492 "config": [ 00:19:50.492 { 00:19:50.492 "method": "iobuf_set_options", 00:19:50.492 "params": { 00:19:50.493 "small_pool_count": 8192, 00:19:50.493 "large_pool_count": 1024, 00:19:50.493 "small_bufsize": 8192, 00:19:50.493 "large_bufsize": 135168, 00:19:50.493 "enable_numa": false 00:19:50.493 } 00:19:50.493 } 00:19:50.493 ] 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "subsystem": "sock", 00:19:50.493 "config": [ 00:19:50.493 { 00:19:50.493 "method": "sock_set_default_impl", 00:19:50.493 "params": { 00:19:50.493 "impl_name": "posix" 00:19:50.493 } 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "method": "sock_impl_set_options", 00:19:50.493 "params": { 00:19:50.493 "impl_name": "ssl", 00:19:50.493 "recv_buf_size": 4096, 00:19:50.493 "send_buf_size": 4096, 00:19:50.493 "enable_recv_pipe": true, 00:19:50.493 "enable_quickack": false, 00:19:50.493 "enable_placement_id": 0, 00:19:50.493 "enable_zerocopy_send_server": true, 00:19:50.493 "enable_zerocopy_send_client": false, 00:19:50.493 "zerocopy_threshold": 0, 00:19:50.493 "tls_version": 0, 00:19:50.493 "enable_ktls": false 00:19:50.493 } 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "method": "sock_impl_set_options", 00:19:50.493 "params": { 00:19:50.493 "impl_name": "posix", 00:19:50.493 "recv_buf_size": 2097152, 00:19:50.493 "send_buf_size": 2097152, 00:19:50.493 "enable_recv_pipe": true, 00:19:50.493 "enable_quickack": false, 00:19:50.493 "enable_placement_id": 0, 
00:19:50.493 "enable_zerocopy_send_server": true, 00:19:50.493 "enable_zerocopy_send_client": false, 00:19:50.493 "zerocopy_threshold": 0, 00:19:50.493 "tls_version": 0, 00:19:50.493 "enable_ktls": false 00:19:50.493 } 00:19:50.493 } 00:19:50.493 ] 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "subsystem": "vmd", 00:19:50.493 "config": [] 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "subsystem": "accel", 00:19:50.493 "config": [ 00:19:50.493 { 00:19:50.493 "method": "accel_set_options", 00:19:50.493 "params": { 00:19:50.493 "small_cache_size": 128, 00:19:50.493 "large_cache_size": 16, 00:19:50.493 "task_count": 2048, 00:19:50.493 "sequence_count": 2048, 00:19:50.493 "buf_count": 2048 00:19:50.493 } 00:19:50.493 } 00:19:50.493 ] 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "subsystem": "bdev", 00:19:50.493 "config": [ 00:19:50.493 { 00:19:50.493 "method": "bdev_set_options", 00:19:50.493 "params": { 00:19:50.493 "bdev_io_pool_size": 65535, 00:19:50.493 "bdev_io_cache_size": 256, 00:19:50.493 "bdev_auto_examine": true, 00:19:50.493 "iobuf_small_cache_size": 128, 00:19:50.493 "iobuf_large_cache_size": 16 00:19:50.493 } 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "method": "bdev_raid_set_options", 00:19:50.493 "params": { 00:19:50.493 "process_window_size_kb": 1024, 00:19:50.493 "process_max_bandwidth_mb_sec": 0 00:19:50.493 } 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "method": "bdev_iscsi_set_options", 00:19:50.493 "params": { 00:19:50.493 "timeout_sec": 30 00:19:50.493 } 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "method": "bdev_nvme_set_options", 00:19:50.493 "params": { 00:19:50.493 "action_on_timeout": "none", 00:19:50.493 "timeout_us": 0, 00:19:50.493 "timeout_admin_us": 0, 00:19:50.493 "keep_alive_timeout_ms": 10000, 00:19:50.493 "arbitration_burst": 0, 00:19:50.493 "low_priority_weight": 0, 00:19:50.493 "medium_priority_weight": 0, 00:19:50.493 "high_priority_weight": 0, 00:19:50.493 "nvme_adminq_poll_period_us": 10000, 00:19:50.493 "nvme_ioq_poll_period_us": 0, 
00:19:50.493 "io_queue_requests": 0, 00:19:50.493 "delay_cmd_submit": true, 00:19:50.493 "transport_retry_count": 4, 00:19:50.493 "bdev_retry_count": 3, 00:19:50.493 "transport_ack_timeout": 0, 00:19:50.493 "ctrlr_loss_timeout_sec": 0, 00:19:50.493 "reconnect_delay_sec": 0, 00:19:50.493 "fast_io_fail_timeout_sec": 0, 00:19:50.493 "disable_auto_failback": false, 00:19:50.493 "generate_uuids": false, 00:19:50.493 "transport_tos": 0, 00:19:50.493 "nvme_error_stat": false, 00:19:50.493 "rdma_srq_size": 0, 00:19:50.493 "io_path_stat": false, 00:19:50.493 "allow_accel_sequence": false, 00:19:50.493 "rdma_max_cq_size": 0, 00:19:50.493 "rdma_cm_event_timeout_ms": 0, 00:19:50.493 "dhchap_digests": [ 00:19:50.493 "sha256", 00:19:50.493 "sha384", 00:19:50.493 "sha512" 00:19:50.493 ], 00:19:50.493 "dhchap_dhgroups": [ 00:19:50.493 "null", 00:19:50.493 "ffdhe2048", 00:19:50.493 "ffdhe3072", 00:19:50.493 "ffdhe4096", 00:19:50.493 "ffdhe6144", 00:19:50.493 "ffdhe8192" 00:19:50.493 ] 00:19:50.493 } 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "method": "bdev_nvme_set_hotplug", 00:19:50.493 "params": { 00:19:50.493 "period_us": 100000, 00:19:50.493 "enable": false 00:19:50.493 } 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "method": "bdev_malloc_create", 00:19:50.493 "params": { 00:19:50.493 "name": "malloc0", 00:19:50.493 "num_blocks": 8192, 00:19:50.493 "block_size": 4096, 00:19:50.493 "physical_block_size": 4096, 00:19:50.493 "uuid": "4f6aafb3-0489-4053-a0e9-97ec6f1cc65a", 00:19:50.493 "optimal_io_boundary": 0, 00:19:50.493 "md_size": 0, 00:19:50.493 "dif_type": 0, 00:19:50.493 "dif_is_head_of_md": false, 00:19:50.493 "dif_pi_format": 0 00:19:50.493 } 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "method": "bdev_wait_for_examine" 00:19:50.493 } 00:19:50.493 ] 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "subsystem": "nbd", 00:19:50.493 "config": [] 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "subsystem": "scheduler", 00:19:50.493 "config": [ 00:19:50.493 { 00:19:50.493 "method": 
"framework_set_scheduler", 00:19:50.493 "params": { 00:19:50.493 "name": "static" 00:19:50.493 } 00:19:50.493 } 00:19:50.493 ] 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "subsystem": "nvmf", 00:19:50.493 "config": [ 00:19:50.493 { 00:19:50.493 "method": "nvmf_set_config", 00:19:50.493 "params": { 00:19:50.493 "discovery_filter": "match_any", 00:19:50.493 "admin_cmd_passthru": { 00:19:50.493 "identify_ctrlr": false 00:19:50.493 }, 00:19:50.493 "dhchap_digests": [ 00:19:50.493 "sha256", 00:19:50.493 "sha384", 00:19:50.493 "sha512" 00:19:50.493 ], 00:19:50.493 "dhchap_dhgroups": [ 00:19:50.493 "null", 00:19:50.493 "ffdhe2048", 00:19:50.493 "ffdhe3072", 00:19:50.493 "ffdhe4096", 00:19:50.493 "ffdhe6144", 00:19:50.494 "ffdhe8192" 00:19:50.494 ] 00:19:50.494 } 00:19:50.494 }, 00:19:50.494 { 00:19:50.494 "method": "nvmf_set_max_subsystems", 00:19:50.494 "params": { 00:19:50.494 "max_subsystems": 1024 00:19:50.494 } 00:19:50.494 }, 00:19:50.494 { 00:19:50.494 "method": "nvmf_set_crdt", 00:19:50.494 "params": { 00:19:50.494 "crdt1": 0, 00:19:50.494 "crdt2": 0, 00:19:50.494 "crdt3": 0 00:19:50.494 } 00:19:50.494 }, 00:19:50.494 { 00:19:50.494 "method": "nvmf_create_transport", 00:19:50.494 "params": { 00:19:50.494 "trtype": "TCP", 00:19:50.494 "max_queue_depth": 128, 00:19:50.494 "max_io_qpairs_per_ctrlr": 127, 00:19:50.494 "in_capsule_data_size": 4096, 00:19:50.494 "max_io_size": 131072, 00:19:50.494 "io_unit_size": 131072, 00:19:50.494 "max_aq_depth": 128, 00:19:50.494 "num_shared_buffers": 511, 00:19:50.494 "buf_cache_size": 4294967295, 00:19:50.494 "dif_insert_or_strip": false, 00:19:50.494 "zcopy": false, 00:19:50.494 "c2h_success": false, 00:19:50.494 "sock_priority": 0, 00:19:50.494 "abort_timeout_sec": 1, 00:19:50.494 "ack_timeout": 0, 00:19:50.494 "data_wr_pool_size": 0 00:19:50.494 } 00:19:50.494 }, 00:19:50.494 { 00:19:50.494 "method": "nvmf_create_subsystem", 00:19:50.494 "params": { 00:19:50.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.494 
"allow_any_host": false, 00:19:50.494 "serial_number": "SPDK00000000000001", 00:19:50.494 "model_number": "SPDK bdev Controller", 00:19:50.494 "max_namespaces": 10, 00:19:50.494 "min_cntlid": 1, 00:19:50.494 "max_cntlid": 65519, 00:19:50.494 "ana_reporting": false 00:19:50.494 } 00:19:50.494 }, 00:19:50.494 { 00:19:50.494 "method": "nvmf_subsystem_add_host", 00:19:50.494 "params": { 00:19:50.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.494 "host": "nqn.2016-06.io.spdk:host1", 00:19:50.494 "psk": "key0" 00:19:50.494 } 00:19:50.494 }, 00:19:50.494 { 00:19:50.494 "method": "nvmf_subsystem_add_ns", 00:19:50.494 "params": { 00:19:50.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.494 "namespace": { 00:19:50.494 "nsid": 1, 00:19:50.494 "bdev_name": "malloc0", 00:19:50.494 "nguid": "4F6AAFB304894053A0E997EC6F1CC65A", 00:19:50.494 "uuid": "4f6aafb3-0489-4053-a0e9-97ec6f1cc65a", 00:19:50.494 "no_auto_visible": false 00:19:50.494 } 00:19:50.494 } 00:19:50.494 }, 00:19:50.494 { 00:19:50.494 "method": "nvmf_subsystem_add_listener", 00:19:50.494 "params": { 00:19:50.494 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.494 "listen_address": { 00:19:50.494 "trtype": "TCP", 00:19:50.494 "adrfam": "IPv4", 00:19:50.494 "traddr": "10.0.0.2", 00:19:50.494 "trsvcid": "4420" 00:19:50.494 }, 00:19:50.494 "secure_channel": true 00:19:50.494 } 00:19:50.494 } 00:19:50.494 ] 00:19:50.494 } 00:19:50.494 ] 00:19:50.494 }' 00:19:50.494 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:50.752 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:50.752 "subsystems": [ 00:19:50.752 { 00:19:50.752 "subsystem": "keyring", 00:19:50.752 "config": [ 00:19:50.752 { 00:19:50.752 "method": "keyring_file_add_key", 00:19:50.752 "params": { 00:19:50.752 "name": "key0", 00:19:50.752 "path": "/tmp/tmp.vGqeJPsI7M" 00:19:50.753 } 
00:19:50.753 } 00:19:50.753 ] 00:19:50.753 }, 00:19:50.753 { 00:19:50.753 "subsystem": "iobuf", 00:19:50.753 "config": [ 00:19:50.753 { 00:19:50.753 "method": "iobuf_set_options", 00:19:50.753 "params": { 00:19:50.753 "small_pool_count": 8192, 00:19:50.753 "large_pool_count": 1024, 00:19:50.753 "small_bufsize": 8192, 00:19:50.753 "large_bufsize": 135168, 00:19:50.753 "enable_numa": false 00:19:50.753 } 00:19:50.753 } 00:19:50.753 ] 00:19:50.753 }, 00:19:50.753 { 00:19:50.753 "subsystem": "sock", 00:19:50.753 "config": [ 00:19:50.753 { 00:19:50.753 "method": "sock_set_default_impl", 00:19:50.753 "params": { 00:19:50.753 "impl_name": "posix" 00:19:50.753 } 00:19:50.753 }, 00:19:50.753 { 00:19:50.753 "method": "sock_impl_set_options", 00:19:50.753 "params": { 00:19:50.753 "impl_name": "ssl", 00:19:50.753 "recv_buf_size": 4096, 00:19:50.753 "send_buf_size": 4096, 00:19:50.753 "enable_recv_pipe": true, 00:19:50.753 "enable_quickack": false, 00:19:50.753 "enable_placement_id": 0, 00:19:50.753 "enable_zerocopy_send_server": true, 00:19:50.753 "enable_zerocopy_send_client": false, 00:19:50.753 "zerocopy_threshold": 0, 00:19:50.753 "tls_version": 0, 00:19:50.753 "enable_ktls": false 00:19:50.753 } 00:19:50.753 }, 00:19:50.753 { 00:19:50.753 "method": "sock_impl_set_options", 00:19:50.753 "params": { 00:19:50.753 "impl_name": "posix", 00:19:50.753 "recv_buf_size": 2097152, 00:19:50.753 "send_buf_size": 2097152, 00:19:50.753 "enable_recv_pipe": true, 00:19:50.753 "enable_quickack": false, 00:19:50.753 "enable_placement_id": 0, 00:19:50.753 "enable_zerocopy_send_server": true, 00:19:50.753 "enable_zerocopy_send_client": false, 00:19:50.753 "zerocopy_threshold": 0, 00:19:50.753 "tls_version": 0, 00:19:50.753 "enable_ktls": false 00:19:50.753 } 00:19:50.753 } 00:19:50.753 ] 00:19:50.753 }, 00:19:50.753 { 00:19:50.753 "subsystem": "vmd", 00:19:50.753 "config": [] 00:19:50.753 }, 00:19:50.753 { 00:19:50.753 "subsystem": "accel", 00:19:50.753 "config": [ 00:19:50.753 { 00:19:50.753 
"method": "accel_set_options", 00:19:50.753 "params": { 00:19:50.753 "small_cache_size": 128, 00:19:50.753 "large_cache_size": 16, 00:19:50.753 "task_count": 2048, 00:19:50.753 "sequence_count": 2048, 00:19:50.753 "buf_count": 2048 00:19:50.753 } 00:19:50.753 } 00:19:50.753 ] 00:19:50.753 }, 00:19:50.753 { 00:19:50.753 "subsystem": "bdev", 00:19:50.753 "config": [ 00:19:50.753 { 00:19:50.753 "method": "bdev_set_options", 00:19:50.753 "params": { 00:19:50.753 "bdev_io_pool_size": 65535, 00:19:50.753 "bdev_io_cache_size": 256, 00:19:50.753 "bdev_auto_examine": true, 00:19:50.753 "iobuf_small_cache_size": 128, 00:19:50.753 "iobuf_large_cache_size": 16 00:19:50.753 } 00:19:50.753 }, 00:19:50.753 { 00:19:50.753 "method": "bdev_raid_set_options", 00:19:50.753 "params": { 00:19:50.753 "process_window_size_kb": 1024, 00:19:50.753 "process_max_bandwidth_mb_sec": 0 00:19:50.753 } 00:19:50.753 }, 00:19:50.753 { 00:19:50.753 "method": "bdev_iscsi_set_options", 00:19:50.753 "params": { 00:19:50.753 "timeout_sec": 30 00:19:50.753 } 00:19:50.753 }, 00:19:50.753 { 00:19:50.753 "method": "bdev_nvme_set_options", 00:19:50.753 "params": { 00:19:50.753 "action_on_timeout": "none", 00:19:50.753 "timeout_us": 0, 00:19:50.753 "timeout_admin_us": 0, 00:19:50.753 "keep_alive_timeout_ms": 10000, 00:19:50.753 "arbitration_burst": 0, 00:19:50.753 "low_priority_weight": 0, 00:19:50.753 "medium_priority_weight": 0, 00:19:50.753 "high_priority_weight": 0, 00:19:50.753 "nvme_adminq_poll_period_us": 10000, 00:19:50.753 "nvme_ioq_poll_period_us": 0, 00:19:50.753 "io_queue_requests": 512, 00:19:50.753 "delay_cmd_submit": true, 00:19:50.753 "transport_retry_count": 4, 00:19:50.753 "bdev_retry_count": 3, 00:19:50.753 "transport_ack_timeout": 0, 00:19:50.753 "ctrlr_loss_timeout_sec": 0, 00:19:50.753 "reconnect_delay_sec": 0, 00:19:50.753 "fast_io_fail_timeout_sec": 0, 00:19:50.753 "disable_auto_failback": false, 00:19:50.753 "generate_uuids": false, 00:19:50.753 "transport_tos": 0, 00:19:50.753 
"nvme_error_stat": false, 00:19:50.753 "rdma_srq_size": 0, 00:19:50.753 "io_path_stat": false, 00:19:50.753 "allow_accel_sequence": false, 00:19:50.753 "rdma_max_cq_size": 0, 00:19:50.753 "rdma_cm_event_timeout_ms": 0, 00:19:50.753 "dhchap_digests": [ 00:19:50.753 "sha256", 00:19:50.753 "sha384", 00:19:50.753 "sha512" 00:19:50.753 ], 00:19:50.753 "dhchap_dhgroups": [ 00:19:50.753 "null", 00:19:50.753 "ffdhe2048", 00:19:50.753 "ffdhe3072", 00:19:50.753 "ffdhe4096", 00:19:50.753 "ffdhe6144", 00:19:50.753 "ffdhe8192" 00:19:50.753 ] 00:19:50.753 } 00:19:50.753 }, 00:19:50.753 { 00:19:50.753 "method": "bdev_nvme_attach_controller", 00:19:50.753 "params": { 00:19:50.753 "name": "TLSTEST", 00:19:50.753 "trtype": "TCP", 00:19:50.753 "adrfam": "IPv4", 00:19:50.753 "traddr": "10.0.0.2", 00:19:50.753 "trsvcid": "4420", 00:19:50.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.753 "prchk_reftag": false, 00:19:50.753 "prchk_guard": false, 00:19:50.753 "ctrlr_loss_timeout_sec": 0, 00:19:50.753 "reconnect_delay_sec": 0, 00:19:50.753 "fast_io_fail_timeout_sec": 0, 00:19:50.753 "psk": "key0", 00:19:50.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.754 "hdgst": false, 00:19:50.754 "ddgst": false, 00:19:50.754 "multipath": "multipath" 00:19:50.754 } 00:19:50.754 }, 00:19:50.754 { 00:19:50.754 "method": "bdev_nvme_set_hotplug", 00:19:50.754 "params": { 00:19:50.754 "period_us": 100000, 00:19:50.754 "enable": false 00:19:50.754 } 00:19:50.754 }, 00:19:50.754 { 00:19:50.754 "method": "bdev_wait_for_examine" 00:19:50.754 } 00:19:50.754 ] 00:19:50.754 }, 00:19:50.754 { 00:19:50.754 "subsystem": "nbd", 00:19:50.754 "config": [] 00:19:50.754 } 00:19:50.754 ] 00:19:50.754 }' 00:19:50.754 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3031001 00:19:50.754 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3031001 ']' 00:19:50.754 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 3031001 00:19:50.754 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:50.754 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.754 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3031001 00:19:50.754 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:50.754 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:50.754 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3031001' 00:19:50.754 killing process with pid 3031001 00:19:50.754 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3031001 00:19:50.754 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.754 00:19:50.754 Latency(us) 00:19:50.754 [2024-12-06T14:36:56.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.754 [2024-12-06T14:36:56.752Z] =================================================================================================================== 00:19:50.754 [2024-12-06T14:36:56.752Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:50.754 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3031001 00:19:51.013 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3030682 00:19:51.013 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3030682 ']' 00:19:51.013 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3030682 00:19:51.013 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:51.013 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.013 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3030682 00:19:51.013 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:51.013 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:51.013 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3030682' 00:19:51.013 killing process with pid 3030682 00:19:51.013 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3030682 00:19:51.013 15:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3030682 00:19:51.273 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:51.273 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.273 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:51.273 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:51.273 "subsystems": [ 00:19:51.273 { 00:19:51.273 "subsystem": "keyring", 00:19:51.273 "config": [ 00:19:51.273 { 00:19:51.273 "method": "keyring_file_add_key", 00:19:51.273 "params": { 00:19:51.273 "name": "key0", 00:19:51.273 "path": "/tmp/tmp.vGqeJPsI7M" 00:19:51.273 } 00:19:51.273 } 00:19:51.273 ] 00:19:51.273 }, 00:19:51.273 { 00:19:51.273 "subsystem": "iobuf", 00:19:51.273 "config": [ 00:19:51.273 { 00:19:51.273 "method": "iobuf_set_options", 00:19:51.273 "params": { 00:19:51.273 "small_pool_count": 8192, 00:19:51.273 "large_pool_count": 1024, 00:19:51.273 "small_bufsize": 8192, 00:19:51.273 "large_bufsize": 135168, 00:19:51.273 "enable_numa": false 00:19:51.273 } 00:19:51.273 } 00:19:51.273 ] 00:19:51.273 }, 
00:19:51.273 { 00:19:51.273 "subsystem": "sock", 00:19:51.273 "config": [ 00:19:51.273 { 00:19:51.273 "method": "sock_set_default_impl", 00:19:51.273 "params": { 00:19:51.273 "impl_name": "posix" 00:19:51.273 } 00:19:51.273 }, 00:19:51.273 { 00:19:51.273 "method": "sock_impl_set_options", 00:19:51.273 "params": { 00:19:51.273 "impl_name": "ssl", 00:19:51.273 "recv_buf_size": 4096, 00:19:51.273 "send_buf_size": 4096, 00:19:51.273 "enable_recv_pipe": true, 00:19:51.273 "enable_quickack": false, 00:19:51.273 "enable_placement_id": 0, 00:19:51.273 "enable_zerocopy_send_server": true, 00:19:51.273 "enable_zerocopy_send_client": false, 00:19:51.273 "zerocopy_threshold": 0, 00:19:51.273 "tls_version": 0, 00:19:51.273 "enable_ktls": false 00:19:51.273 } 00:19:51.273 }, 00:19:51.273 { 00:19:51.273 "method": "sock_impl_set_options", 00:19:51.273 "params": { 00:19:51.273 "impl_name": "posix", 00:19:51.273 "recv_buf_size": 2097152, 00:19:51.273 "send_buf_size": 2097152, 00:19:51.273 "enable_recv_pipe": true, 00:19:51.273 "enable_quickack": false, 00:19:51.273 "enable_placement_id": 0, 00:19:51.273 "enable_zerocopy_send_server": true, 00:19:51.273 "enable_zerocopy_send_client": false, 00:19:51.273 "zerocopy_threshold": 0, 00:19:51.273 "tls_version": 0, 00:19:51.273 "enable_ktls": false 00:19:51.273 } 00:19:51.273 } 00:19:51.273 ] 00:19:51.273 }, 00:19:51.273 { 00:19:51.273 "subsystem": "vmd", 00:19:51.273 "config": [] 00:19:51.273 }, 00:19:51.273 { 00:19:51.273 "subsystem": "accel", 00:19:51.273 "config": [ 00:19:51.273 { 00:19:51.273 "method": "accel_set_options", 00:19:51.273 "params": { 00:19:51.273 "small_cache_size": 128, 00:19:51.273 "large_cache_size": 16, 00:19:51.273 "task_count": 2048, 00:19:51.273 "sequence_count": 2048, 00:19:51.273 "buf_count": 2048 00:19:51.273 } 00:19:51.273 } 00:19:51.273 ] 00:19:51.273 }, 00:19:51.273 { 00:19:51.273 "subsystem": "bdev", 00:19:51.273 "config": [ 00:19:51.273 { 00:19:51.273 "method": "bdev_set_options", 00:19:51.273 "params": { 
00:19:51.273 "bdev_io_pool_size": 65535, 00:19:51.273 "bdev_io_cache_size": 256, 00:19:51.273 "bdev_auto_examine": true, 00:19:51.273 "iobuf_small_cache_size": 128, 00:19:51.273 "iobuf_large_cache_size": 16 00:19:51.273 } 00:19:51.273 }, 00:19:51.273 { 00:19:51.273 "method": "bdev_raid_set_options", 00:19:51.273 "params": { 00:19:51.273 "process_window_size_kb": 1024, 00:19:51.273 "process_max_bandwidth_mb_sec": 0 00:19:51.273 } 00:19:51.273 }, 00:19:51.273 { 00:19:51.273 "method": "bdev_iscsi_set_options", 00:19:51.273 "params": { 00:19:51.273 "timeout_sec": 30 00:19:51.273 } 00:19:51.273 }, 00:19:51.274 { 00:19:51.274 "method": "bdev_nvme_set_options", 00:19:51.274 "params": { 00:19:51.274 "action_on_timeout": "none", 00:19:51.274 "timeout_us": 0, 00:19:51.274 "timeout_admin_us": 0, 00:19:51.274 "keep_alive_timeout_ms": 10000, 00:19:51.274 "arbitration_burst": 0, 00:19:51.274 "low_priority_weight": 0, 00:19:51.274 "medium_priority_weight": 0, 00:19:51.274 "high_priority_weight": 0, 00:19:51.274 "nvme_adminq_poll_period_us": 10000, 00:19:51.274 "nvme_ioq_poll_period_us": 0, 00:19:51.274 "io_queue_requests": 0, 00:19:51.274 "delay_cmd_submit": true, 00:19:51.274 "transport_retry_count": 4, 00:19:51.274 "bdev_retry_count": 3, 00:19:51.274 "transport_ack_timeout": 0, 00:19:51.274 "ctrlr_loss_timeout_sec": 0, 00:19:51.274 "reconnect_delay_sec": 0, 00:19:51.274 "fast_io_fail_timeout_sec": 0, 00:19:51.274 "disable_auto_failback": false, 00:19:51.274 "generate_uuids": false, 00:19:51.274 "transport_tos": 0, 00:19:51.274 "nvme_error_stat": false, 00:19:51.274 "rdma_srq_size": 0, 00:19:51.274 "io_path_stat": false, 00:19:51.274 "allow_accel_sequence": false, 00:19:51.274 "rdma_max_cq_size": 0, 00:19:51.274 "rdma_cm_event_timeout_ms": 0, 00:19:51.274 "dhchap_digests": [ 00:19:51.274 "sha256", 00:19:51.274 "sha384", 00:19:51.274 "sha512" 00:19:51.274 ], 00:19:51.274 "dhchap_dhgroups": [ 00:19:51.274 "null", 00:19:51.274 "ffdhe2048", 00:19:51.274 "ffdhe3072", 00:19:51.274 
00:19:51.274               "ffdhe4096",
              "ffdhe6144",
              "ffdhe8192"
            ]
          }
        },
        {
          "method": "bdev_nvme_set_hotplug",
          "params": {
            "period_us": 100000,
            "enable": false
          }
        },
        {
          "method": "bdev_malloc_create",
          "params": {
            "name": "malloc0",
            "num_blocks": 8192,
            "block_size": 4096,
            "physical_block_size": 4096,
            "uuid": "4f6aafb3-0489-4053-a0e9-97ec6f1cc65a",
            "optimal_io_boundary": 0,
            "md_size": 0,
            "dif_type": 0,
            "dif_is_head_of_md": false,
            "dif_pi_format": 0
          }
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    },
    {
      "subsystem": "nbd",
      "config": []
    },
    {
      "subsystem": "scheduler",
      "config": [
        {
          "method": "framework_set_scheduler",
          "params": {
            "name": "static"
          }
        }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        {
          "method": "nvmf_set_config",
          "params": {
            "discovery_filter": "match_any",
            "admin_cmd_passthru": {
              "identify_ctrlr": false
            },
            "dhchap_digests": [
              "sha256",
              "sha384",
              "sha512"
            ],
            "dhchap_dhgroups": [
              "null",
              "ffdhe2048",
              "ffdhe3072",
              "ffdhe4096",
              "ffdhe6144",
              "ffdhe8192"
            ]
          }
        },
        {
          "method": "nvmf_set_max_subsystems",
          "params": {
            "max_subsystems": 1024
          }
        },
        {
          "method": "nvmf_set_crdt",
          "params": {
            "crdt1": 0,
            "crdt2": 0,
            "crdt3": 0
          }
        },
        {
          "method": "nvmf_create_transport",
          "params": {
            "trtype": "TCP",
            "max_queue_depth": 128,
            "max_io_qpairs_per_ctrlr": 127,
            "in_capsule_data_size": 4096,
            "max_io_size": 131072,
            "io_unit_size": 131072,
            "max_aq_depth": 128,
            "num_shared_buffers": 511,
            "buf_cache_size": 4294967295,
            "dif_insert_or_strip": false,
            "zcopy": false,
            "c2h_success": false,
            "sock_priority": 0,
            "abort_timeout_sec": 1,
            "ack_timeout": 0,
            "data_wr_pool_size": 0
          }
        },
        {
          "method": "nvmf_create_subsystem",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "allow_any_host": false,
            "serial_number": "SPDK00000000000001",
            "model_number": "SPDK bdev Controller",
            "max_namespaces": 10,
            "min_cntlid": 1,
            "max_cntlid": 65519,
            "ana_reporting": false
          }
        },
        {
          "method": "nvmf_subsystem_add_host",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "host": "nqn.2016-06.io.spdk:host1",
            "psk": "key0"
          }
        },
        {
          "method": "nvmf_subsystem_add_ns",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {
              "nsid": 1,
              "bdev_name": "malloc0",
              "nguid": "4F6AAFB304894053A0E997EC6F1CC65A",
              "uuid": "4f6aafb3-0489-4053-a0e9-97ec6f1cc65a",
              "no_auto_visible": false
            }
          }
        },
        {
          "method": "nvmf_subsystem_add_listener",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "listen_address": {
              "trtype": "TCP",
              "adrfam": "IPv4",
              "traddr": "10.0.0.2",
              "trsvcid": "4420"
            },
            "secure_channel": true
          }
        }
      ]
    }
  ]
}'
00:19:51.275 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3031401
15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62
15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3031401
15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3031401 ']'
15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
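The subsystems config echoed above is consumed by nvmf_tgt through `-c /dev/fd/62`. A minimal sketch of sanity-checking the shape of such a config before launch; the `rpc_methods` helper is ours, not part of SPDK, and only the config layout (a `"subsystems"` list whose entries carry a `"config"` list of `"method"`/`"params"` items) is taken from this log:

```python
import json

# Collect the JSON-RPC method names a subsystems config would replay at
# startup, in order. The layout mirrors the config echoed in the log above.
def rpc_methods(config_text):
    cfg = json.loads(config_text)
    return [item["method"]
            for sub in cfg["subsystems"]
            for item in sub["config"]]

# A tiny config in the same shape as the one fed to nvmf_tgt in this run.
sample = json.dumps({
    "subsystems": [
        {"subsystem": "nvmf", "config": [
            {"method": "nvmf_create_transport", "params": {"trtype": "TCP"}},
            {"method": "nvmf_subsystem_add_host",
             "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                        "host": "nqn.2016-06.io.spdk:host1",
                        "psk": "key0"}},
        ]},
    ]
})
print(rpc_methods(sample))
```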
00:19:51.275 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.275 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.275 [2024-12-06 15:36:57.110471] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:19:51.275 [2024-12-06 15:36:57.110518] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.275 [2024-12-06 15:36:57.182752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.275 [2024-12-06 15:36:57.222715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.275 [2024-12-06 15:36:57.222749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.275 [2024-12-06 15:36:57.222755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.275 [2024-12-06 15:36:57.222762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.275 [2024-12-06 15:36:57.222767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:51.275 [2024-12-06 15:36:57.223346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.534 [2024-12-06 15:36:57.435785] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.534 [2024-12-06 15:36:57.467803] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:51.534 [2024-12-06 15:36:57.468015] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.102 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.102 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:52.102 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.102 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:52.102 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.102 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.102 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3031433 00:19:52.102 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3031433 /var/tmp/bdevperf.sock 00:19:52.102 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:52.102 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3031433 ']' 00:19:52.102 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.102 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100
00:19:52.102 15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        {
          "method": "keyring_file_add_key",
          "params": {
            "name": "key0",
            "path": "/tmp/tmp.vGqeJPsI7M"
          }
        }
      ]
    },
    {
      "subsystem": "iobuf",
      "config": [
        {
          "method": "iobuf_set_options",
          "params": {
            "small_pool_count": 8192,
            "large_pool_count": 1024,
            "small_bufsize": 8192,
            "large_bufsize": 135168,
            "enable_numa": false
          }
        }
      ]
    },
    {
      "subsystem": "sock",
      "config": [
        {
          "method": "sock_set_default_impl",
          "params": {
            "impl_name": "posix"
          }
        },
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "ssl",
            "recv_buf_size": 4096,
            "send_buf_size": 4096,
            "enable_recv_pipe": true,
            "enable_quickack": false,
            "enable_placement_id": 0,
            "enable_zerocopy_send_server": true,
            "enable_zerocopy_send_client": false,
            "zerocopy_threshold": 0,
            "tls_version": 0,
            "enable_ktls": false
          }
        },
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "posix",
            "recv_buf_size": 2097152,
            "send_buf_size": 2097152,
            "enable_recv_pipe": true,
            "enable_quickack": false,
            "enable_placement_id": 0,
            "enable_zerocopy_send_server": true,
            "enable_zerocopy_send_client": false,
            "zerocopy_threshold": 0,
            "tls_version": 0,
            "enable_ktls": false
          }
        }
      ]
    },
    {
      "subsystem": "vmd",
      "config": []
    },
    {
      "subsystem": "accel",
      "config": [
        {
          "method": "accel_set_options",
          "params": {
            "small_cache_size": 128,
            "large_cache_size": 16,
            "task_count": 2048,
            "sequence_count": 2048,
            "buf_count": 2048
          }
        }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_set_options",
          "params": {
            "bdev_io_pool_size": 65535,
            "bdev_io_cache_size": 256,
            "bdev_auto_examine": true,
            "iobuf_small_cache_size": 128,
            "iobuf_large_cache_size": 16
          }
        },
        {
          "method": "bdev_raid_set_options",
          "params": {
            "process_window_size_kb": 1024,
            "process_max_bandwidth_mb_sec": 0
          }
        },
        {
          "method": "bdev_iscsi_set_options",
          "params": {
            "timeout_sec": 30
          }
        },
        {
          "method": "bdev_nvme_set_options",
          "params": {
            "action_on_timeout": "none",
            "timeout_us": 0,
            "timeout_admin_us": 0,
            "keep_alive_timeout_ms": 10000,
            "arbitration_burst": 0,
            "low_priority_weight": 0,
            "medium_priority_weight": 0,
            "high_priority_weight": 0,
            "nvme_adminq_poll_period_us": 10000,
            "nvme_ioq_poll_period_us": 0,
            "io_queue_requests": 512,
            "delay_cmd_submit": true,
            "transport_retry_count": 4,
            "bdev_retry_count": 3,
            "transport_ack_timeout": 0,
            "ctrlr_loss_timeout_sec": 0,
            "reconnect_delay_sec": 0,
            "fast_io_fail_timeout_sec": 0,
            "disable_auto_failback": false,
            "generate_uuids": false,
            "transport_tos": 0,
            "nvme_error_stat": false,
            "rdma_srq_size": 0,
            "io_path_stat": false,
            "allow_accel_sequence": false,
            "rdma_max_cq_size": 0,
            "rdma_cm_event_timeout_ms": 0,
            "dhchap_digests": [
              "sha256",
              "sha384",
              "sha512"
            ],
            "dhchap_dhgroups": [
              "null",
              "ffdhe2048",
              "ffdhe3072",
              "ffdhe4096",
              "ffdhe6144",
              "ffdhe8192"
            ]
          }
        },
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "TLSTEST",
            "trtype": "TCP",
            "adrfam": "IPv4",
            "traddr": "10.0.0.2",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "prchk_reftag": false,
            "prchk_guard": false,
            "ctrlr_loss_timeout_sec": 0,
            "reconnect_delay_sec": 0,
            "fast_io_fail_timeout_sec": 0,
            "psk": "key0",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false,
            "multipath": "multipath"
          }
        },
        {
          "method": "bdev_nvme_set_hotplug",
          "params": {
            "period_us": 100000,
            "enable": false
          }
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    },
    {
      "subsystem": "nbd",
      "config": []
    }
  ]
}'
15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
15:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
[2024-12-06 15:36:58.017241] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization...
[2024-12-06 15:36:58.017287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3031433 ]
[2024-12-06 15:36:58.091288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:52.361 [2024-12-06 15:36:58.131632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-12-06 15:36:58.285331] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:52.928 15:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
15:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
15:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:19:53.187 Running I/O for 10 seconds...
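The bdevperf summary that follows reports both `iops` and `mibps`; the latter is simply IOPS × io_size in bytes, divided by 2^20. A quick cross-check with the figures from this 10-second run:

```python
# Reproduce bdevperf's "mibps" field from its "iops" and "io_size" fields
# (values taken from the results JSON of the 10-second run below).
iops = 5529.116459448754
io_size = 4096            # bytes per I/O
runtime = 10.017152       # seconds

mibps = iops * io_size / 2**20   # MiB/s = IOPS * bytes-per-IO / 2^20
print(round(mibps, 2))           # matches the reported 21.60 MiB/s
```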
00:19:55.057  5361.00 IOPS,    20.94 MiB/s
[2024-12-06T14:37:01.989Z]  5410.50 IOPS,    21.13 MiB/s
[2024-12-06T14:37:03.365Z]  5455.00 IOPS,    21.31 MiB/s
[2024-12-06T14:37:04.299Z]  5504.00 IOPS,    21.50 MiB/s
[2024-12-06T14:37:05.235Z]  5523.80 IOPS,    21.58 MiB/s
[2024-12-06T14:37:06.169Z]  5542.50 IOPS,    21.65 MiB/s
[2024-12-06T14:37:07.104Z]  5536.43 IOPS,    21.63 MiB/s
[2024-12-06T14:37:08.041Z]  5543.00 IOPS,    21.65 MiB/s
[2024-12-06T14:37:08.977Z]  5542.44 IOPS,    21.65 MiB/s
[2024-12-06T14:37:09.236Z]  5525.80 IOPS,    21.59 MiB/s
00:20:03.238 Latency(us)
[2024-12-06T14:37:09.236Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:20:03.238 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:03.238 Verification LBA range: start 0x0 length 0x2000
00:20:03.238 TLSTESTn1 : 10.02 5529.12 21.60 0.00 0.00 23113.61 6616.02 40694.74
[2024-12-06T14:37:09.236Z] ===================================================================================================================
[2024-12-06T14:37:09.236Z] Total : 5529.12 21.60 0.00 0.00 23113.61 6616.02 40694.74
00:20:03.238 {
  "results": [
    {
      "job": "TLSTESTn1",
      "core_mask": "0x4",
      "workload": "verify",
      "status": "finished",
      "verify_range": {
        "start": 0,
        "length": 8192
      },
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 10.017152,
      "iops": 5529.116459448754,
      "mibps": 21.598111169721694,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 23113.609540248268,
      "min_latency_us": 6616.015238095238,
      "max_latency_us": 40694.735238095236
    }
  ],
  "core_count": 1
}
15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini;
exit 1' SIGINT SIGTERM EXIT 00:20:03.238 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3031433 00:20:03.238 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3031433 ']' 00:20:03.238 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3031433 00:20:03.238 15:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.238 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.238 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3031433 00:20:03.238 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:03.238 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:03.238 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3031433' 00:20:03.238 killing process with pid 3031433 00:20:03.238 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3031433 00:20:03.238 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.238 00:20:03.238 Latency(us) 00:20:03.238 [2024-12-06T14:37:09.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.238 [2024-12-06T14:37:09.236Z] =================================================================================================================== 00:20:03.238 [2024-12-06T14:37:09.236Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.238 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3031433 00:20:03.238 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3031401 00:20:03.238 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 3031401 ']' 00:20:03.238 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3031401 00:20:03.238 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.238 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.238 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3031401 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3031401' 00:20:03.498 killing process with pid 3031401 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3031401 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3031401 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3033291 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3033291 00:20:03.498 
15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3033291 ']' 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.498 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.498 [2024-12-06 15:37:09.488868] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:20:03.498 [2024-12-06 15:37:09.488918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.758 [2024-12-06 15:37:09.565260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.758 [2024-12-06 15:37:09.603661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.758 [2024-12-06 15:37:09.603698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.758 [2024-12-06 15:37:09.603706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.758 [2024-12-06 15:37:09.603711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:03.758 [2024-12-06 15:37:09.603717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.758 [2024-12-06 15:37:09.604302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.758 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.758 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:03.758 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:03.758 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:03.758 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.758 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.758 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.vGqeJPsI7M 00:20:03.758 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vGqeJPsI7M 00:20:03.758 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:04.017 [2024-12-06 15:37:09.908436] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.017 15:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:04.276 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:04.535 [2024-12-06 15:37:10.301446] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:20:04.536 [2024-12-06 15:37:10.301658] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.536 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:04.536 malloc0 00:20:04.794 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:04.794 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vGqeJPsI7M 00:20:05.054 15:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:05.314 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3033706 00:20:05.314 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:05.314 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.314 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3033706 /var/tmp/bdevperf.sock 00:20:05.314 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3033706 ']' 00:20:05.314 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.314 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.314 
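The setup_nvmf_tgt sequence traced above reduces to the following rpc.py calls (arguments exactly as in this log; the `$rpc` variable is our shorthand for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path):

```shell
rpc=scripts/rpc.py   # shorthand; the log uses the full workspace path

$rpc nvmf_create_transport -t tcp -o                                # TCP transport
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -s SPDK00000000000001 -m 10                                    # subsystem, 10 namespaces max
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420 -k                                  # -k: secure (TLS) channel
$rpc bdev_malloc_create 32 4096 -b malloc0                          # 32 MiB ramdisk backing bdev
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # expose it as namespace 1
$rpc keyring_file_add_key key0 /tmp/tmp.vGqeJPsI7M                  # load PSK file into keyring
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
     nqn.2016-06.io.spdk:host1 --psk key0                           # allow host1 with that PSK
```

This is a command transcript, not a standalone script: it assumes a running nvmf_tgt listening on the default /var/tmp/spdk.sock.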
15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.314 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.314 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.314 [2024-12-06 15:37:11.175728] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:20:05.314 [2024-12-06 15:37:11.175777] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033706 ] 00:20:05.314 [2024-12-06 15:37:11.252344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.314 [2024-12-06 15:37:11.292491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.573 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.573 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:05.573 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vGqeJPsI7M 00:20:05.832 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:05.832 [2024-12-06 15:37:11.756817] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental
00:20:05.832 nvme0n1
00:20:06.090 15:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
Running I/O for 1 seconds...
00:20:07.024  5513.00 IOPS,    21.54 MiB/s
00:20:07.024 Latency(us)
[2024-12-06T14:37:13.022Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:20:07.024 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:07.024 Verification LBA range: start 0x0 length 0x2000
00:20:07.024 nvme0n1 : 1.01 5570.80 21.76 0.00 0.00 22827.51 5242.88 23343.30
[2024-12-06T14:37:13.022Z] ===================================================================================================================
[2024-12-06T14:37:13.022Z] Total : 5570.80 21.76 0.00 0.00 22827.51 5242.88 23343.30
00:20:07.024 {
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "verify",
      "status": "finished",
      "verify_range": {
        "start": 0,
        "length": 8192
      },
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 1.012781,
      "iops": 5570.799610182261,
      "mibps": 21.760935977274457,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 22827.513215509527,
      "min_latency_us": 5242.88,
      "max_latency_us": 23343.299047619046
    }
  ],
  "core_count": 1
}
15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3033706
15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3033706 ']'
15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@958 -- # kill -0 3033706 00:20:07.024 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:07.024 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.024 15:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3033706 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3033706' 00:20:07.282 killing process with pid 3033706 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3033706 00:20:07.282 Received shutdown signal, test time was about 1.000000 seconds 00:20:07.282 00:20:07.282 Latency(us) 00:20:07.282 [2024-12-06T14:37:13.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.282 [2024-12-06T14:37:13.280Z] =================================================================================================================== 00:20:07.282 [2024-12-06T14:37:13.280Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3033706 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3033291 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3033291 ']' 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3033291 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3033291 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3033291' 00:20:07.282 killing process with pid 3033291 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3033291 00:20:07.282 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3033291 00:20:07.540 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:07.540 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.540 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:07.540 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.540 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3034004 00:20:07.540 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:07.540 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3034004 00:20:07.540 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3034004 ']' 00:20:07.540 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.540 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:20:07.540 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.540 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.540 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.540 [2024-12-06 15:37:13.465995] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:20:07.540 [2024-12-06 15:37:13.466046] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.799 [2024-12-06 15:37:13.542511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.799 [2024-12-06 15:37:13.581136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.799 [2024-12-06 15:37:13.581171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.799 [2024-12-06 15:37:13.581179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.799 [2024-12-06 15:37:13.581185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.799 [2024-12-06 15:37:13.581191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:07.799 [2024-12-06 15:37:13.581757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.799 [2024-12-06 15:37:13.725005] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.799 malloc0 00:20:07.799 [2024-12-06 15:37:13.753307] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:07.799 [2024-12-06 15:37:13.753511] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3034097 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3034097 /var/tmp/bdevperf.sock 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3034097 ']' 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.799 15:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.057 [2024-12-06 15:37:13.829894] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:20:08.057 [2024-12-06 15:37:13.829934] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034097 ] 00:20:08.057 [2024-12-06 15:37:13.903523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.057 [2024-12-06 15:37:13.945673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.057 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.057 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:08.057 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vGqeJPsI7M 00:20:08.315 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:08.573 [2024-12-06 15:37:14.387300] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:08.573 nvme0n1 00:20:08.573 15:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.573 Running I/O for 1 seconds... 
00:20:09.945 5377.00 IOPS, 21.00 MiB/s 00:20:09.945 Latency(us) 00:20:09.945 [2024-12-06T14:37:15.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.945 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:09.945 Verification LBA range: start 0x0 length 0x2000 00:20:09.945 nvme0n1 : 1.01 5435.79 21.23 0.00 0.00 23390.02 5180.46 24217.11 00:20:09.945 [2024-12-06T14:37:15.943Z] =================================================================================================================== 00:20:09.945 [2024-12-06T14:37:15.944Z] Total : 5435.79 21.23 0.00 0.00 23390.02 5180.46 24217.11 00:20:09.946 { 00:20:09.946 "results": [ 00:20:09.946 { 00:20:09.946 "job": "nvme0n1", 00:20:09.946 "core_mask": "0x2", 00:20:09.946 "workload": "verify", 00:20:09.946 "status": "finished", 00:20:09.946 "verify_range": { 00:20:09.946 "start": 0, 00:20:09.946 "length": 8192 00:20:09.946 }, 00:20:09.946 "queue_depth": 128, 00:20:09.946 "io_size": 4096, 00:20:09.946 "runtime": 1.012917, 00:20:09.946 "iops": 5435.785952847074, 00:20:09.946 "mibps": 21.233538878308885, 00:20:09.946 "io_failed": 0, 00:20:09.946 "io_timeout": 0, 00:20:09.946 "avg_latency_us": 23390.022623285422, 00:20:09.946 "min_latency_us": 5180.464761904762, 00:20:09.946 "max_latency_us": 24217.11238095238 00:20:09.946 } 00:20:09.946 ], 00:20:09.946 "core_count": 1 00:20:09.946 } 00:20:09.946 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:09.946 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.946 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.946 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.946 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:09.946 "subsystems": [ 00:20:09.946 { 00:20:09.946 "subsystem": 
"keyring", 00:20:09.946 "config": [ 00:20:09.946 { 00:20:09.946 "method": "keyring_file_add_key", 00:20:09.946 "params": { 00:20:09.946 "name": "key0", 00:20:09.946 "path": "/tmp/tmp.vGqeJPsI7M" 00:20:09.946 } 00:20:09.946 } 00:20:09.946 ] 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "subsystem": "iobuf", 00:20:09.946 "config": [ 00:20:09.946 { 00:20:09.946 "method": "iobuf_set_options", 00:20:09.946 "params": { 00:20:09.946 "small_pool_count": 8192, 00:20:09.946 "large_pool_count": 1024, 00:20:09.946 "small_bufsize": 8192, 00:20:09.946 "large_bufsize": 135168, 00:20:09.946 "enable_numa": false 00:20:09.946 } 00:20:09.946 } 00:20:09.946 ] 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "subsystem": "sock", 00:20:09.946 "config": [ 00:20:09.946 { 00:20:09.946 "method": "sock_set_default_impl", 00:20:09.946 "params": { 00:20:09.946 "impl_name": "posix" 00:20:09.946 } 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "method": "sock_impl_set_options", 00:20:09.946 "params": { 00:20:09.946 "impl_name": "ssl", 00:20:09.946 "recv_buf_size": 4096, 00:20:09.946 "send_buf_size": 4096, 00:20:09.946 "enable_recv_pipe": true, 00:20:09.946 "enable_quickack": false, 00:20:09.946 "enable_placement_id": 0, 00:20:09.946 "enable_zerocopy_send_server": true, 00:20:09.946 "enable_zerocopy_send_client": false, 00:20:09.946 "zerocopy_threshold": 0, 00:20:09.946 "tls_version": 0, 00:20:09.946 "enable_ktls": false 00:20:09.946 } 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "method": "sock_impl_set_options", 00:20:09.946 "params": { 00:20:09.946 "impl_name": "posix", 00:20:09.946 "recv_buf_size": 2097152, 00:20:09.946 "send_buf_size": 2097152, 00:20:09.946 "enable_recv_pipe": true, 00:20:09.946 "enable_quickack": false, 00:20:09.946 "enable_placement_id": 0, 00:20:09.946 "enable_zerocopy_send_server": true, 00:20:09.946 "enable_zerocopy_send_client": false, 00:20:09.946 "zerocopy_threshold": 0, 00:20:09.946 "tls_version": 0, 00:20:09.946 "enable_ktls": false 00:20:09.946 } 00:20:09.946 } 00:20:09.946 
] 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "subsystem": "vmd", 00:20:09.946 "config": [] 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "subsystem": "accel", 00:20:09.946 "config": [ 00:20:09.946 { 00:20:09.946 "method": "accel_set_options", 00:20:09.946 "params": { 00:20:09.946 "small_cache_size": 128, 00:20:09.946 "large_cache_size": 16, 00:20:09.946 "task_count": 2048, 00:20:09.946 "sequence_count": 2048, 00:20:09.946 "buf_count": 2048 00:20:09.946 } 00:20:09.946 } 00:20:09.946 ] 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "subsystem": "bdev", 00:20:09.946 "config": [ 00:20:09.946 { 00:20:09.946 "method": "bdev_set_options", 00:20:09.946 "params": { 00:20:09.946 "bdev_io_pool_size": 65535, 00:20:09.946 "bdev_io_cache_size": 256, 00:20:09.946 "bdev_auto_examine": true, 00:20:09.946 "iobuf_small_cache_size": 128, 00:20:09.946 "iobuf_large_cache_size": 16 00:20:09.946 } 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "method": "bdev_raid_set_options", 00:20:09.946 "params": { 00:20:09.946 "process_window_size_kb": 1024, 00:20:09.946 "process_max_bandwidth_mb_sec": 0 00:20:09.946 } 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "method": "bdev_iscsi_set_options", 00:20:09.946 "params": { 00:20:09.946 "timeout_sec": 30 00:20:09.946 } 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "method": "bdev_nvme_set_options", 00:20:09.946 "params": { 00:20:09.946 "action_on_timeout": "none", 00:20:09.946 "timeout_us": 0, 00:20:09.946 "timeout_admin_us": 0, 00:20:09.946 "keep_alive_timeout_ms": 10000, 00:20:09.946 "arbitration_burst": 0, 00:20:09.946 "low_priority_weight": 0, 00:20:09.946 "medium_priority_weight": 0, 00:20:09.946 "high_priority_weight": 0, 00:20:09.946 "nvme_adminq_poll_period_us": 10000, 00:20:09.946 "nvme_ioq_poll_period_us": 0, 00:20:09.946 "io_queue_requests": 0, 00:20:09.946 "delay_cmd_submit": true, 00:20:09.946 "transport_retry_count": 4, 00:20:09.946 "bdev_retry_count": 3, 00:20:09.946 "transport_ack_timeout": 0, 00:20:09.946 "ctrlr_loss_timeout_sec": 0, 
00:20:09.946 "reconnect_delay_sec": 0, 00:20:09.946 "fast_io_fail_timeout_sec": 0, 00:20:09.946 "disable_auto_failback": false, 00:20:09.946 "generate_uuids": false, 00:20:09.946 "transport_tos": 0, 00:20:09.946 "nvme_error_stat": false, 00:20:09.946 "rdma_srq_size": 0, 00:20:09.946 "io_path_stat": false, 00:20:09.946 "allow_accel_sequence": false, 00:20:09.946 "rdma_max_cq_size": 0, 00:20:09.946 "rdma_cm_event_timeout_ms": 0, 00:20:09.946 "dhchap_digests": [ 00:20:09.946 "sha256", 00:20:09.946 "sha384", 00:20:09.946 "sha512" 00:20:09.946 ], 00:20:09.946 "dhchap_dhgroups": [ 00:20:09.946 "null", 00:20:09.946 "ffdhe2048", 00:20:09.946 "ffdhe3072", 00:20:09.946 "ffdhe4096", 00:20:09.946 "ffdhe6144", 00:20:09.946 "ffdhe8192" 00:20:09.946 ] 00:20:09.946 } 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "method": "bdev_nvme_set_hotplug", 00:20:09.946 "params": { 00:20:09.946 "period_us": 100000, 00:20:09.946 "enable": false 00:20:09.946 } 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "method": "bdev_malloc_create", 00:20:09.946 "params": { 00:20:09.946 "name": "malloc0", 00:20:09.946 "num_blocks": 8192, 00:20:09.946 "block_size": 4096, 00:20:09.946 "physical_block_size": 4096, 00:20:09.946 "uuid": "84ee4d50-5c9c-4d67-b247-cb9a0b0510a9", 00:20:09.946 "optimal_io_boundary": 0, 00:20:09.946 "md_size": 0, 00:20:09.946 "dif_type": 0, 00:20:09.946 "dif_is_head_of_md": false, 00:20:09.946 "dif_pi_format": 0 00:20:09.946 } 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "method": "bdev_wait_for_examine" 00:20:09.946 } 00:20:09.946 ] 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "subsystem": "nbd", 00:20:09.946 "config": [] 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "subsystem": "scheduler", 00:20:09.946 "config": [ 00:20:09.946 { 00:20:09.946 "method": "framework_set_scheduler", 00:20:09.946 "params": { 00:20:09.946 "name": "static" 00:20:09.946 } 00:20:09.946 } 00:20:09.946 ] 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "subsystem": "nvmf", 00:20:09.946 "config": [ 00:20:09.946 { 
00:20:09.946 "method": "nvmf_set_config", 00:20:09.946 "params": { 00:20:09.946 "discovery_filter": "match_any", 00:20:09.946 "admin_cmd_passthru": { 00:20:09.946 "identify_ctrlr": false 00:20:09.946 }, 00:20:09.946 "dhchap_digests": [ 00:20:09.946 "sha256", 00:20:09.946 "sha384", 00:20:09.946 "sha512" 00:20:09.946 ], 00:20:09.946 "dhchap_dhgroups": [ 00:20:09.946 "null", 00:20:09.946 "ffdhe2048", 00:20:09.946 "ffdhe3072", 00:20:09.946 "ffdhe4096", 00:20:09.946 "ffdhe6144", 00:20:09.946 "ffdhe8192" 00:20:09.946 ] 00:20:09.946 } 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "method": "nvmf_set_max_subsystems", 00:20:09.946 "params": { 00:20:09.946 "max_subsystems": 1024 00:20:09.946 } 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "method": "nvmf_set_crdt", 00:20:09.946 "params": { 00:20:09.946 "crdt1": 0, 00:20:09.946 "crdt2": 0, 00:20:09.946 "crdt3": 0 00:20:09.946 } 00:20:09.946 }, 00:20:09.946 { 00:20:09.946 "method": "nvmf_create_transport", 00:20:09.946 "params": { 00:20:09.947 "trtype": "TCP", 00:20:09.947 "max_queue_depth": 128, 00:20:09.947 "max_io_qpairs_per_ctrlr": 127, 00:20:09.947 "in_capsule_data_size": 4096, 00:20:09.947 "max_io_size": 131072, 00:20:09.947 "io_unit_size": 131072, 00:20:09.947 "max_aq_depth": 128, 00:20:09.947 "num_shared_buffers": 511, 00:20:09.947 "buf_cache_size": 4294967295, 00:20:09.947 "dif_insert_or_strip": false, 00:20:09.947 "zcopy": false, 00:20:09.947 "c2h_success": false, 00:20:09.947 "sock_priority": 0, 00:20:09.947 "abort_timeout_sec": 1, 00:20:09.947 "ack_timeout": 0, 00:20:09.947 "data_wr_pool_size": 0 00:20:09.947 } 00:20:09.947 }, 00:20:09.947 { 00:20:09.947 "method": "nvmf_create_subsystem", 00:20:09.947 "params": { 00:20:09.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.947 "allow_any_host": false, 00:20:09.947 "serial_number": "00000000000000000000", 00:20:09.947 "model_number": "SPDK bdev Controller", 00:20:09.947 "max_namespaces": 32, 00:20:09.947 "min_cntlid": 1, 00:20:09.947 "max_cntlid": 65519, 00:20:09.947 
"ana_reporting": false 00:20:09.947 } 00:20:09.947 }, 00:20:09.947 { 00:20:09.947 "method": "nvmf_subsystem_add_host", 00:20:09.947 "params": { 00:20:09.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.947 "host": "nqn.2016-06.io.spdk:host1", 00:20:09.947 "psk": "key0" 00:20:09.947 } 00:20:09.947 }, 00:20:09.947 { 00:20:09.947 "method": "nvmf_subsystem_add_ns", 00:20:09.947 "params": { 00:20:09.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.947 "namespace": { 00:20:09.947 "nsid": 1, 00:20:09.947 "bdev_name": "malloc0", 00:20:09.947 "nguid": "84EE4D505C9C4D67B247CB9A0B0510A9", 00:20:09.947 "uuid": "84ee4d50-5c9c-4d67-b247-cb9a0b0510a9", 00:20:09.947 "no_auto_visible": false 00:20:09.947 } 00:20:09.947 } 00:20:09.947 }, 00:20:09.947 { 00:20:09.947 "method": "nvmf_subsystem_add_listener", 00:20:09.947 "params": { 00:20:09.947 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.947 "listen_address": { 00:20:09.947 "trtype": "TCP", 00:20:09.947 "adrfam": "IPv4", 00:20:09.947 "traddr": "10.0.0.2", 00:20:09.947 "trsvcid": "4420" 00:20:09.947 }, 00:20:09.947 "secure_channel": false, 00:20:09.947 "sock_impl": "ssl" 00:20:09.947 } 00:20:09.947 } 00:20:09.947 ] 00:20:09.947 } 00:20:09.947 ] 00:20:09.947 }' 00:20:09.947 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:10.206 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:10.206 "subsystems": [ 00:20:10.206 { 00:20:10.206 "subsystem": "keyring", 00:20:10.206 "config": [ 00:20:10.206 { 00:20:10.206 "method": "keyring_file_add_key", 00:20:10.206 "params": { 00:20:10.206 "name": "key0", 00:20:10.206 "path": "/tmp/tmp.vGqeJPsI7M" 00:20:10.206 } 00:20:10.206 } 00:20:10.206 ] 00:20:10.206 }, 00:20:10.206 { 00:20:10.206 "subsystem": "iobuf", 00:20:10.206 "config": [ 00:20:10.206 { 00:20:10.206 "method": "iobuf_set_options", 00:20:10.206 "params": { 00:20:10.206 
"small_pool_count": 8192, 00:20:10.206 "large_pool_count": 1024, 00:20:10.206 "small_bufsize": 8192, 00:20:10.206 "large_bufsize": 135168, 00:20:10.206 "enable_numa": false 00:20:10.206 } 00:20:10.206 } 00:20:10.206 ] 00:20:10.206 }, 00:20:10.206 { 00:20:10.206 "subsystem": "sock", 00:20:10.206 "config": [ 00:20:10.206 { 00:20:10.206 "method": "sock_set_default_impl", 00:20:10.206 "params": { 00:20:10.206 "impl_name": "posix" 00:20:10.206 } 00:20:10.206 }, 00:20:10.206 { 00:20:10.206 "method": "sock_impl_set_options", 00:20:10.206 "params": { 00:20:10.206 "impl_name": "ssl", 00:20:10.206 "recv_buf_size": 4096, 00:20:10.206 "send_buf_size": 4096, 00:20:10.206 "enable_recv_pipe": true, 00:20:10.206 "enable_quickack": false, 00:20:10.206 "enable_placement_id": 0, 00:20:10.206 "enable_zerocopy_send_server": true, 00:20:10.206 "enable_zerocopy_send_client": false, 00:20:10.206 "zerocopy_threshold": 0, 00:20:10.206 "tls_version": 0, 00:20:10.206 "enable_ktls": false 00:20:10.206 } 00:20:10.206 }, 00:20:10.206 { 00:20:10.206 "method": "sock_impl_set_options", 00:20:10.206 "params": { 00:20:10.206 "impl_name": "posix", 00:20:10.206 "recv_buf_size": 2097152, 00:20:10.206 "send_buf_size": 2097152, 00:20:10.206 "enable_recv_pipe": true, 00:20:10.206 "enable_quickack": false, 00:20:10.206 "enable_placement_id": 0, 00:20:10.206 "enable_zerocopy_send_server": true, 00:20:10.206 "enable_zerocopy_send_client": false, 00:20:10.206 "zerocopy_threshold": 0, 00:20:10.206 "tls_version": 0, 00:20:10.206 "enable_ktls": false 00:20:10.206 } 00:20:10.206 } 00:20:10.206 ] 00:20:10.206 }, 00:20:10.206 { 00:20:10.206 "subsystem": "vmd", 00:20:10.206 "config": [] 00:20:10.206 }, 00:20:10.206 { 00:20:10.206 "subsystem": "accel", 00:20:10.206 "config": [ 00:20:10.206 { 00:20:10.206 "method": "accel_set_options", 00:20:10.206 "params": { 00:20:10.206 "small_cache_size": 128, 00:20:10.206 "large_cache_size": 16, 00:20:10.206 "task_count": 2048, 00:20:10.206 "sequence_count": 2048, 00:20:10.206 
"buf_count": 2048 00:20:10.206 } 00:20:10.206 } 00:20:10.206 ] 00:20:10.206 }, 00:20:10.206 { 00:20:10.206 "subsystem": "bdev", 00:20:10.206 "config": [ 00:20:10.206 { 00:20:10.206 "method": "bdev_set_options", 00:20:10.206 "params": { 00:20:10.206 "bdev_io_pool_size": 65535, 00:20:10.206 "bdev_io_cache_size": 256, 00:20:10.206 "bdev_auto_examine": true, 00:20:10.206 "iobuf_small_cache_size": 128, 00:20:10.206 "iobuf_large_cache_size": 16 00:20:10.206 } 00:20:10.206 }, 00:20:10.206 { 00:20:10.206 "method": "bdev_raid_set_options", 00:20:10.206 "params": { 00:20:10.206 "process_window_size_kb": 1024, 00:20:10.206 "process_max_bandwidth_mb_sec": 0 00:20:10.206 } 00:20:10.206 }, 00:20:10.206 { 00:20:10.206 "method": "bdev_iscsi_set_options", 00:20:10.206 "params": { 00:20:10.206 "timeout_sec": 30 00:20:10.206 } 00:20:10.206 }, 00:20:10.206 { 00:20:10.206 "method": "bdev_nvme_set_options", 00:20:10.206 "params": { 00:20:10.206 "action_on_timeout": "none", 00:20:10.206 "timeout_us": 0, 00:20:10.206 "timeout_admin_us": 0, 00:20:10.206 "keep_alive_timeout_ms": 10000, 00:20:10.206 "arbitration_burst": 0, 00:20:10.206 "low_priority_weight": 0, 00:20:10.206 "medium_priority_weight": 0, 00:20:10.206 "high_priority_weight": 0, 00:20:10.206 "nvme_adminq_poll_period_us": 10000, 00:20:10.206 "nvme_ioq_poll_period_us": 0, 00:20:10.206 "io_queue_requests": 512, 00:20:10.206 "delay_cmd_submit": true, 00:20:10.206 "transport_retry_count": 4, 00:20:10.206 "bdev_retry_count": 3, 00:20:10.206 "transport_ack_timeout": 0, 00:20:10.206 "ctrlr_loss_timeout_sec": 0, 00:20:10.206 "reconnect_delay_sec": 0, 00:20:10.206 "fast_io_fail_timeout_sec": 0, 00:20:10.206 "disable_auto_failback": false, 00:20:10.206 "generate_uuids": false, 00:20:10.206 "transport_tos": 0, 00:20:10.206 "nvme_error_stat": false, 00:20:10.206 "rdma_srq_size": 0, 00:20:10.206 "io_path_stat": false, 00:20:10.206 "allow_accel_sequence": false, 00:20:10.206 "rdma_max_cq_size": 0, 00:20:10.206 "rdma_cm_event_timeout_ms": 0, 
00:20:10.206 "dhchap_digests": [ 00:20:10.206 "sha256", 00:20:10.206 "sha384", 00:20:10.206 "sha512" 00:20:10.206 ], 00:20:10.206 "dhchap_dhgroups": [ 00:20:10.206 "null", 00:20:10.206 "ffdhe2048", 00:20:10.206 "ffdhe3072", 00:20:10.206 "ffdhe4096", 00:20:10.206 "ffdhe6144", 00:20:10.206 "ffdhe8192" 00:20:10.206 ] 00:20:10.206 } 00:20:10.206 }, 00:20:10.206 { 00:20:10.206 "method": "bdev_nvme_attach_controller", 00:20:10.206 "params": { 00:20:10.206 "name": "nvme0", 00:20:10.206 "trtype": "TCP", 00:20:10.206 "adrfam": "IPv4", 00:20:10.206 "traddr": "10.0.0.2", 00:20:10.206 "trsvcid": "4420", 00:20:10.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.206 "prchk_reftag": false, 00:20:10.206 "prchk_guard": false, 00:20:10.206 "ctrlr_loss_timeout_sec": 0, 00:20:10.207 "reconnect_delay_sec": 0, 00:20:10.207 "fast_io_fail_timeout_sec": 0, 00:20:10.207 "psk": "key0", 00:20:10.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.207 "hdgst": false, 00:20:10.207 "ddgst": false, 00:20:10.207 "multipath": "multipath" 00:20:10.207 } 00:20:10.207 }, 00:20:10.207 { 00:20:10.207 "method": "bdev_nvme_set_hotplug", 00:20:10.207 "params": { 00:20:10.207 "period_us": 100000, 00:20:10.207 "enable": false 00:20:10.207 } 00:20:10.207 }, 00:20:10.207 { 00:20:10.207 "method": "bdev_enable_histogram", 00:20:10.207 "params": { 00:20:10.207 "name": "nvme0n1", 00:20:10.207 "enable": true 00:20:10.207 } 00:20:10.207 }, 00:20:10.207 { 00:20:10.207 "method": "bdev_wait_for_examine" 00:20:10.207 } 00:20:10.207 ] 00:20:10.207 }, 00:20:10.207 { 00:20:10.207 "subsystem": "nbd", 00:20:10.207 "config": [] 00:20:10.207 } 00:20:10.207 ] 00:20:10.207 }' 00:20:10.207 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3034097 00:20:10.207 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3034097 ']' 00:20:10.207 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3034097 00:20:10.207 15:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:10.207 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.207 15:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3034097 00:20:10.207 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:10.207 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:10.207 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3034097' 00:20:10.207 killing process with pid 3034097 00:20:10.207 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3034097 00:20:10.207 Received shutdown signal, test time was about 1.000000 seconds 00:20:10.207 00:20:10.207 Latency(us) 00:20:10.207 [2024-12-06T14:37:16.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.207 [2024-12-06T14:37:16.205Z] =================================================================================================================== 00:20:10.207 [2024-12-06T14:37:16.205Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.207 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3034097 00:20:10.207 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3034004 00:20:10.207 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3034004 ']' 00:20:10.207 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3034004 00:20:10.207 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:10.207 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.207 
15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3034004 00:20:10.466 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:10.466 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:10.466 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3034004' 00:20:10.466 killing process with pid 3034004 00:20:10.466 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3034004 00:20:10.466 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3034004 00:20:10.466 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:10.466 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:10.466 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:10.466 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:10.466 "subsystems": [ 00:20:10.466 { 00:20:10.466 "subsystem": "keyring", 00:20:10.466 "config": [ 00:20:10.466 { 00:20:10.466 "method": "keyring_file_add_key", 00:20:10.466 "params": { 00:20:10.466 "name": "key0", 00:20:10.466 "path": "/tmp/tmp.vGqeJPsI7M" 00:20:10.466 } 00:20:10.466 } 00:20:10.466 ] 00:20:10.466 }, 00:20:10.466 { 00:20:10.466 "subsystem": "iobuf", 00:20:10.466 "config": [ 00:20:10.466 { 00:20:10.466 "method": "iobuf_set_options", 00:20:10.466 "params": { 00:20:10.466 "small_pool_count": 8192, 00:20:10.466 "large_pool_count": 1024, 00:20:10.466 "small_bufsize": 8192, 00:20:10.466 "large_bufsize": 135168, 00:20:10.466 "enable_numa": false 00:20:10.466 } 00:20:10.466 } 00:20:10.466 ] 00:20:10.466 }, 00:20:10.466 { 00:20:10.466 "subsystem": "sock", 00:20:10.466 "config": [ 
00:20:10.466 { 00:20:10.466 "method": "sock_set_default_impl", 00:20:10.466 "params": { 00:20:10.466 "impl_name": "posix" 00:20:10.466 } 00:20:10.466 }, 00:20:10.466 { 00:20:10.466 "method": "sock_impl_set_options", 00:20:10.466 "params": { 00:20:10.466 "impl_name": "ssl", 00:20:10.466 "recv_buf_size": 4096, 00:20:10.466 "send_buf_size": 4096, 00:20:10.466 "enable_recv_pipe": true, 00:20:10.466 "enable_quickack": false, 00:20:10.466 "enable_placement_id": 0, 00:20:10.466 "enable_zerocopy_send_server": true, 00:20:10.466 "enable_zerocopy_send_client": false, 00:20:10.466 "zerocopy_threshold": 0, 00:20:10.466 "tls_version": 0, 00:20:10.466 "enable_ktls": false 00:20:10.466 } 00:20:10.466 }, 00:20:10.466 { 00:20:10.466 "method": "sock_impl_set_options", 00:20:10.466 "params": { 00:20:10.466 "impl_name": "posix", 00:20:10.466 "recv_buf_size": 2097152, 00:20:10.466 "send_buf_size": 2097152, 00:20:10.466 "enable_recv_pipe": true, 00:20:10.466 "enable_quickack": false, 00:20:10.466 "enable_placement_id": 0, 00:20:10.466 "enable_zerocopy_send_server": true, 00:20:10.466 "enable_zerocopy_send_client": false, 00:20:10.466 "zerocopy_threshold": 0, 00:20:10.466 "tls_version": 0, 00:20:10.466 "enable_ktls": false 00:20:10.466 } 00:20:10.466 } 00:20:10.466 ] 00:20:10.466 }, 00:20:10.466 { 00:20:10.466 "subsystem": "vmd", 00:20:10.466 "config": [] 00:20:10.466 }, 00:20:10.466 { 00:20:10.466 "subsystem": "accel", 00:20:10.466 "config": [ 00:20:10.466 { 00:20:10.466 "method": "accel_set_options", 00:20:10.466 "params": { 00:20:10.466 "small_cache_size": 128, 00:20:10.466 "large_cache_size": 16, 00:20:10.466 "task_count": 2048, 00:20:10.466 "sequence_count": 2048, 00:20:10.466 "buf_count": 2048 00:20:10.466 } 00:20:10.466 } 00:20:10.466 ] 00:20:10.466 }, 00:20:10.466 { 00:20:10.466 "subsystem": "bdev", 00:20:10.466 "config": [ 00:20:10.466 { 00:20:10.466 "method": "bdev_set_options", 00:20:10.466 "params": { 00:20:10.466 "bdev_io_pool_size": 65535, 00:20:10.466 "bdev_io_cache_size": 
256, 00:20:10.466 "bdev_auto_examine": true, 00:20:10.466 "iobuf_small_cache_size": 128, 00:20:10.466 "iobuf_large_cache_size": 16 00:20:10.466 } 00:20:10.466 }, 00:20:10.466 { 00:20:10.466 "method": "bdev_raid_set_options", 00:20:10.466 "params": { 00:20:10.466 "process_window_size_kb": 1024, 00:20:10.466 "process_max_bandwidth_mb_sec": 0 00:20:10.467 } 00:20:10.467 }, 00:20:10.467 { 00:20:10.467 "method": "bdev_iscsi_set_options", 00:20:10.467 "params": { 00:20:10.467 "timeout_sec": 30 00:20:10.467 } 00:20:10.467 }, 00:20:10.467 { 00:20:10.467 "method": "bdev_nvme_set_options", 00:20:10.467 "params": { 00:20:10.467 "action_on_timeout": "none", 00:20:10.467 "timeout_us": 0, 00:20:10.467 "timeout_admin_us": 0, 00:20:10.467 "keep_alive_timeout_ms": 10000, 00:20:10.467 "arbitration_burst": 0, 00:20:10.467 "low_priority_weight": 0, 00:20:10.467 "medium_priority_weight": 0, 00:20:10.467 "high_priority_weight": 0, 00:20:10.467 "nvme_adminq_poll_period_us": 10000, 00:20:10.467 "nvme_ioq_poll_period_us": 0, 00:20:10.467 "io_queue_requests": 0, 00:20:10.467 "delay_cmd_submit": true, 00:20:10.467 "transport_retry_count": 4, 00:20:10.467 "bdev_retry_count": 3, 00:20:10.467 "transport_ack_timeout": 0, 00:20:10.467 "ctrlr_loss_timeout_sec": 0, 00:20:10.467 "reconnect_delay_sec": 0, 00:20:10.467 "fast_io_fail_timeout_sec": 0, 00:20:10.467 "disable_auto_failback": false, 00:20:10.467 "generate_uuids": false, 00:20:10.467 "transport_tos": 0, 00:20:10.467 "nvme_error_stat": false, 00:20:10.467 "rdma_srq_size": 0, 00:20:10.467 "io_path_stat": false, 00:20:10.467 "allow_accel_sequence": false, 00:20:10.467 "rdma_max_cq_size": 0, 00:20:10.467 "rdma_cm_event_timeout_ms": 0, 00:20:10.467 "dhchap_digests": [ 00:20:10.467 "sha256", 00:20:10.467 "sha384", 00:20:10.467 "sha512" 00:20:10.467 ], 00:20:10.467 "dhchap_dhgroups": [ 00:20:10.467 "null", 00:20:10.467 "ffdhe2048", 00:20:10.467 "ffdhe3072", 00:20:10.467 "ffdhe4096", 00:20:10.467 "ffdhe6144", 00:20:10.467 "ffdhe8192" 00:20:10.467 ] 
00:20:10.467 } 00:20:10.467 }, 00:20:10.467 { 00:20:10.467 "method": "bdev_nvme_set_hotplug", 00:20:10.467 "params": { 00:20:10.467 "period_us": 100000, 00:20:10.467 "enable": false 00:20:10.467 } 00:20:10.467 }, 00:20:10.467 { 00:20:10.467 "method": "bdev_malloc_create", 00:20:10.467 "params": { 00:20:10.467 "name": "malloc0", 00:20:10.467 "num_blocks": 8192, 00:20:10.467 "block_size": 4096, 00:20:10.467 "physical_block_size": 4096, 00:20:10.467 "uuid": "84ee4d50-5c9c-4d67-b247-cb9a0b0510a9", 00:20:10.467 "optimal_io_boundary": 0, 00:20:10.467 "md_size": 0, 00:20:10.467 "dif_type": 0, 00:20:10.467 "dif_is_head_of_md": false, 00:20:10.467 "dif_pi_format": 0 00:20:10.467 } 00:20:10.467 }, 00:20:10.467 { 00:20:10.467 "method": "bdev_wait_for_examine" 00:20:10.467 } 00:20:10.467 ] 00:20:10.467 }, 00:20:10.467 { 00:20:10.467 "subsystem": "nbd", 00:20:10.467 "config": [] 00:20:10.467 }, 00:20:10.467 { 00:20:10.467 "subsystem": "scheduler", 00:20:10.467 "config": [ 00:20:10.467 { 00:20:10.467 "method": "framework_set_scheduler", 00:20:10.467 "params": { 00:20:10.467 "name": "static" 00:20:10.467 } 00:20:10.467 } 00:20:10.467 ] 00:20:10.467 }, 00:20:10.467 { 00:20:10.467 "subsystem": "nvmf", 00:20:10.467 "config": [ 00:20:10.467 { 00:20:10.467 "method": "nvmf_set_config", 00:20:10.467 "params": { 00:20:10.467 "discovery_filter": "match_any", 00:20:10.467 "admin_cmd_passthru": { 00:20:10.467 "identify_ctrlr": false 00:20:10.467 }, 00:20:10.467 "dhchap_digests": [ 00:20:10.467 "sha256", 00:20:10.467 "sha384", 00:20:10.467 "sha512" 00:20:10.467 ], 00:20:10.467 "dhchap_dhgroups": [ 00:20:10.467 "null", 00:20:10.467 "ffdhe2048", 00:20:10.467 "ffdhe3072", 00:20:10.467 "ffdhe4096", 00:20:10.467 "ffdhe6144", 00:20:10.467 "ffdhe8192" 00:20:10.467 ] 00:20:10.467 } 00:20:10.467 }, 00:20:10.467 { 00:20:10.467 "method": "nvmf_set_max_subsystems", 00:20:10.467 "params": { 00:20:10.467 "max_subsystems": 1024 00:20:10.467 } 00:20:10.467 }, 00:20:10.467 { 00:20:10.467 "method": 
"nvmf_set_crdt", 00:20:10.467 "params": { 00:20:10.467 "crdt1": 0, 00:20:10.467 "crdt2": 0, 00:20:10.467 "crdt3": 0 00:20:10.467 } 00:20:10.467 }, 00:20:10.467 { 00:20:10.467 "method": "nvmf_create_transport", 00:20:10.467 "params": { 00:20:10.467 "trtype": "TCP", 00:20:10.467 "max_queue_depth": 128, 00:20:10.467 "max_io_qpairs_per_ctrlr": 127, 00:20:10.467 "in_capsule_data_size": 4096, 00:20:10.467 "max_io_size": 131072, 00:20:10.467 "io_unit_size": 131072, 00:20:10.467 "max_aq_depth": 128, 00:20:10.467 "num_shared_buffers": 511, 00:20:10.467 "buf_cache_size": 4294967295, 00:20:10.467 "dif_insert_or_strip": false, 00:20:10.467 "zcopy": false, 00:20:10.467 "c2h_success": false, 00:20:10.467 "sock_priority": 0, 00:20:10.467 "abort_timeout_sec": 1, 00:20:10.467 "ack_timeout": 0, 00:20:10.467 "data_wr_pool_size": 0 00:20:10.467 } 00:20:10.467 }, 00:20:10.467 { 00:20:10.467 "method": "nvmf_create_subsystem", 00:20:10.467 "params": { 00:20:10.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.467 "allow_any_host": false, 00:20:10.467 "serial_number": "00000000000000000000", 00:20:10.467 "model_number": "SPDK bdev Controller", 00:20:10.467 "max_namespaces": 32, 00:20:10.467 "min_cntlid": 1, 00:20:10.467 "max_cntlid": 65519, 00:20:10.467 "ana_reporting": false 00:20:10.467 } 00:20:10.467 }, 00:20:10.467 { 00:20:10.467 "method": "nvmf_subsystem_add_host", 00:20:10.467 "params": { 00:20:10.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.467 "host": "nqn.2016-06.io.spdk:host1", 00:20:10.467 "psk": "key0" 00:20:10.467 } 00:20:10.467 }, 00:20:10.467 { 00:20:10.467 "method": "nvmf_subsystem_add_ns", 00:20:10.467 "params": { 00:20:10.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.467 "namespace": { 00:20:10.467 "nsid": 1, 00:20:10.467 "bdev_name": "malloc0", 00:20:10.467 "nguid": "84EE4D505C9C4D67B247CB9A0B0510A9", 00:20:10.467 "uuid": "84ee4d50-5c9c-4d67-b247-cb9a0b0510a9", 00:20:10.467 "no_auto_visible": false 00:20:10.467 } 00:20:10.467 } 00:20:10.467 }, 00:20:10.467 { 
00:20:10.467 "method": "nvmf_subsystem_add_listener", 00:20:10.467 "params": { 00:20:10.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.467 "listen_address": { 00:20:10.467 "trtype": "TCP", 00:20:10.467 "adrfam": "IPv4", 00:20:10.467 "traddr": "10.0.0.2", 00:20:10.467 "trsvcid": "4420" 00:20:10.467 }, 00:20:10.467 "secure_channel": false, 00:20:10.467 "sock_impl": "ssl" 00:20:10.467 } 00:20:10.467 } 00:20:10.467 ] 00:20:10.467 } 00:20:10.467 ] 00:20:10.467 }' 00:20:10.467 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.467 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3034498 00:20:10.467 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:10.467 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3034498 00:20:10.467 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3034498 ']' 00:20:10.467 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.467 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.467 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.467 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.467 15:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.726 [2024-12-06 15:37:16.466751] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:20:10.726 [2024-12-06 15:37:16.466799] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.726 [2024-12-06 15:37:16.544751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.726 [2024-12-06 15:37:16.582280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.726 [2024-12-06 15:37:16.582313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.726 [2024-12-06 15:37:16.582320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.726 [2024-12-06 15:37:16.582326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.726 [2024-12-06 15:37:16.582331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:10.726 [2024-12-06 15:37:16.582942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.985 [2024-12-06 15:37:16.796525] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.985 [2024-12-06 15:37:16.828553] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.985 [2024-12-06 15:37:16.828760] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.551 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.551 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:11.551 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:11.551 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:11.551 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.551 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.551 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3034745 00:20:11.551 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3034745 /var/tmp/bdevperf.sock 00:20:11.551 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3034745 ']' 00:20:11.551 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.551 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:11.551 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:11.551 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.551 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:11.551 "subsystems": [ 00:20:11.551 { 00:20:11.551 "subsystem": "keyring", 00:20:11.551 "config": [ 00:20:11.551 { 00:20:11.551 "method": "keyring_file_add_key", 00:20:11.551 "params": { 00:20:11.551 "name": "key0", 00:20:11.551 "path": "/tmp/tmp.vGqeJPsI7M" 00:20:11.551 } 00:20:11.551 } 00:20:11.551 ] 00:20:11.551 }, 00:20:11.551 { 00:20:11.551 "subsystem": "iobuf", 00:20:11.551 "config": [ 00:20:11.551 { 00:20:11.551 "method": "iobuf_set_options", 00:20:11.551 "params": { 00:20:11.551 "small_pool_count": 8192, 00:20:11.551 "large_pool_count": 1024, 00:20:11.551 "small_bufsize": 8192, 00:20:11.551 "large_bufsize": 135168, 00:20:11.551 "enable_numa": false 00:20:11.551 } 00:20:11.551 } 00:20:11.551 ] 00:20:11.551 }, 00:20:11.551 { 00:20:11.551 "subsystem": "sock", 00:20:11.551 "config": [ 00:20:11.551 { 00:20:11.551 "method": "sock_set_default_impl", 00:20:11.551 "params": { 00:20:11.551 "impl_name": "posix" 00:20:11.551 } 00:20:11.551 }, 00:20:11.551 { 00:20:11.551 "method": "sock_impl_set_options", 00:20:11.551 "params": { 00:20:11.551 "impl_name": "ssl", 00:20:11.551 "recv_buf_size": 4096, 00:20:11.551 "send_buf_size": 4096, 00:20:11.551 "enable_recv_pipe": true, 00:20:11.551 "enable_quickack": false, 00:20:11.551 "enable_placement_id": 0, 00:20:11.551 "enable_zerocopy_send_server": true, 00:20:11.551 "enable_zerocopy_send_client": false, 00:20:11.551 "zerocopy_threshold": 0, 00:20:11.551 "tls_version": 0, 00:20:11.551 "enable_ktls": false 00:20:11.551 } 00:20:11.551 }, 00:20:11.551 { 00:20:11.551 "method": "sock_impl_set_options", 00:20:11.551 "params": { 
00:20:11.551 "impl_name": "posix", 00:20:11.551 "recv_buf_size": 2097152, 00:20:11.551 "send_buf_size": 2097152, 00:20:11.551 "enable_recv_pipe": true, 00:20:11.551 "enable_quickack": false, 00:20:11.551 "enable_placement_id": 0, 00:20:11.551 "enable_zerocopy_send_server": true, 00:20:11.551 "enable_zerocopy_send_client": false, 00:20:11.551 "zerocopy_threshold": 0, 00:20:11.551 "tls_version": 0, 00:20:11.551 "enable_ktls": false 00:20:11.551 } 00:20:11.551 } 00:20:11.551 ] 00:20:11.551 }, 00:20:11.551 { 00:20:11.551 "subsystem": "vmd", 00:20:11.551 "config": [] 00:20:11.551 }, 00:20:11.551 { 00:20:11.551 "subsystem": "accel", 00:20:11.551 "config": [ 00:20:11.551 { 00:20:11.551 "method": "accel_set_options", 00:20:11.551 "params": { 00:20:11.551 "small_cache_size": 128, 00:20:11.551 "large_cache_size": 16, 00:20:11.551 "task_count": 2048, 00:20:11.551 "sequence_count": 2048, 00:20:11.551 "buf_count": 2048 00:20:11.551 } 00:20:11.551 } 00:20:11.551 ] 00:20:11.551 }, 00:20:11.551 { 00:20:11.551 "subsystem": "bdev", 00:20:11.551 "config": [ 00:20:11.551 { 00:20:11.551 "method": "bdev_set_options", 00:20:11.551 "params": { 00:20:11.551 "bdev_io_pool_size": 65535, 00:20:11.551 "bdev_io_cache_size": 256, 00:20:11.551 "bdev_auto_examine": true, 00:20:11.551 "iobuf_small_cache_size": 128, 00:20:11.552 "iobuf_large_cache_size": 16 00:20:11.552 } 00:20:11.552 }, 00:20:11.552 { 00:20:11.552 "method": "bdev_raid_set_options", 00:20:11.552 "params": { 00:20:11.552 "process_window_size_kb": 1024, 00:20:11.552 "process_max_bandwidth_mb_sec": 0 00:20:11.552 } 00:20:11.552 }, 00:20:11.552 { 00:20:11.552 "method": "bdev_iscsi_set_options", 00:20:11.552 "params": { 00:20:11.552 "timeout_sec": 30 00:20:11.552 } 00:20:11.552 }, 00:20:11.552 { 00:20:11.552 "method": "bdev_nvme_set_options", 00:20:11.552 "params": { 00:20:11.552 "action_on_timeout": "none", 00:20:11.552 "timeout_us": 0, 00:20:11.552 "timeout_admin_us": 0, 00:20:11.552 "keep_alive_timeout_ms": 10000, 00:20:11.552 
"arbitration_burst": 0, 00:20:11.552 "low_priority_weight": 0, 00:20:11.552 "medium_priority_weight": 0, 00:20:11.552 "high_priority_weight": 0, 00:20:11.552 "nvme_adminq_poll_period_us": 10000, 00:20:11.552 "nvme_ioq_poll_period_us": 0, 00:20:11.552 "io_queue_requests": 512, 00:20:11.552 "delay_cmd_submit": true, 00:20:11.552 "transport_retry_count": 4, 00:20:11.552 "bdev_retry_count": 3, 00:20:11.552 "transport_ack_timeout": 0, 00:20:11.552 "ctrlr_loss_timeout_sec": 0, 00:20:11.552 "reconnect_delay_sec": 0, 00:20:11.552 "fast_io_fail_timeout_sec": 0, 00:20:11.552 "disable_auto_failback": false, 00:20:11.552 "generate_uuids": false, 00:20:11.552 "transport_tos": 0, 00:20:11.552 "nvme_error_stat": false, 00:20:11.552 "rdma_srq_size": 0, 00:20:11.552 "io_path_stat": false, 00:20:11.552 "allow_accel_sequence": false, 00:20:11.552 "rdma_max_cq_size": 0, 00:20:11.552 "rdma_cm_event_timeout_ms": 0, 00:20:11.552 "dhchap_digests": [ 00:20:11.552 "sha256", 00:20:11.552 "sha384", 00:20:11.552 "sha512" 00:20:11.552 ], 00:20:11.552 "dhchap_dhgroups": [ 00:20:11.552 "null", 00:20:11.552 "ffdhe2048", 00:20:11.552 "ffdhe3072", 00:20:11.552 "ffdhe4096", 00:20:11.552 "ffdhe6144", 00:20:11.552 "ffdhe8192" 00:20:11.552 ] 00:20:11.552 } 00:20:11.552 }, 00:20:11.552 { 00:20:11.552 "method": "bdev_nvme_attach_controller", 00:20:11.552 "params": { 00:20:11.552 "name": "nvme0", 00:20:11.552 "trtype": "TCP", 00:20:11.552 "adrfam": "IPv4", 00:20:11.552 "traddr": "10.0.0.2", 00:20:11.552 "trsvcid": "4420", 00:20:11.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.552 "prchk_reftag": false, 00:20:11.552 "prchk_guard": false, 00:20:11.552 "ctrlr_loss_timeout_sec": 0, 00:20:11.552 "reconnect_delay_sec": 0, 00:20:11.552 "fast_io_fail_timeout_sec": 0, 00:20:11.552 "psk": "key0", 00:20:11.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.552 "hdgst": false, 00:20:11.552 "ddgst": false, 00:20:11.552 "multipath": "multipath" 00:20:11.552 } 00:20:11.552 }, 00:20:11.552 { 00:20:11.552 
"method": "bdev_nvme_set_hotplug", 00:20:11.552 "params": { 00:20:11.552 "period_us": 100000, 00:20:11.552 "enable": false 00:20:11.552 } 00:20:11.552 }, 00:20:11.552 { 00:20:11.552 "method": "bdev_enable_histogram", 00:20:11.552 "params": { 00:20:11.552 "name": "nvme0n1", 00:20:11.552 "enable": true 00:20:11.552 } 00:20:11.552 }, 00:20:11.552 { 00:20:11.552 "method": "bdev_wait_for_examine" 00:20:11.552 } 00:20:11.552 ] 00:20:11.552 }, 00:20:11.552 { 00:20:11.552 "subsystem": "nbd", 00:20:11.552 "config": [] 00:20:11.552 } 00:20:11.552 ] 00:20:11.552 }' 00:20:11.552 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.552 15:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.552 [2024-12-06 15:37:17.386238] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:20:11.552 [2024-12-06 15:37:17.386285] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034745 ] 00:20:11.552 [2024-12-06 15:37:17.459262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.552 [2024-12-06 15:37:17.501697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.810 [2024-12-06 15:37:17.655804] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.402 15:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.402 15:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:12.402 15:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:12.402 15:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:12.659 15:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.659 15:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:12.659 Running I/O for 1 seconds... 00:20:13.591 5526.00 IOPS, 21.59 MiB/s 00:20:13.591 Latency(us) 00:20:13.591 [2024-12-06T14:37:19.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.591 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:13.591 Verification LBA range: start 0x0 length 0x2000 00:20:13.591 nvme0n1 : 1.02 5563.89 21.73 0.00 0.00 22815.34 4743.56 20971.52 00:20:13.591 [2024-12-06T14:37:19.589Z] =================================================================================================================== 00:20:13.591 [2024-12-06T14:37:19.589Z] Total : 5563.89 21.73 0.00 0.00 22815.34 4743.56 20971.52 00:20:13.591 { 00:20:13.591 "results": [ 00:20:13.591 { 00:20:13.591 "job": "nvme0n1", 00:20:13.591 "core_mask": "0x2", 00:20:13.591 "workload": "verify", 00:20:13.591 "status": "finished", 00:20:13.591 "verify_range": { 00:20:13.591 "start": 0, 00:20:13.591 "length": 8192 00:20:13.591 }, 00:20:13.591 "queue_depth": 128, 00:20:13.591 "io_size": 4096, 00:20:13.591 "runtime": 1.016195, 00:20:13.591 "iops": 5563.892756803566, 00:20:13.591 "mibps": 21.73395608126393, 00:20:13.591 "io_failed": 0, 00:20:13.591 "io_timeout": 0, 00:20:13.591 "avg_latency_us": 22815.338881196625, 00:20:13.591 "min_latency_us": 4743.558095238095, 00:20:13.591 "max_latency_us": 20971.52 00:20:13.591 } 00:20:13.591 ], 00:20:13.591 "core_count": 1 00:20:13.591 } 00:20:13.591 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:13.591 15:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:13.591 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:13.591 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:13.591 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:13.591 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:13.591 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:13.591 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:13.591 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:13.591 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:13.591 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:13.591 nvmf_trace.0 00:20:13.849 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:13.849 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3034745 00:20:13.849 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3034745 ']' 00:20:13.849 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3034745 00:20:13.849 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.849 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.849 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3034745 00:20:13.849 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:13.849 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:13.849 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3034745' 00:20:13.849 killing process with pid 3034745 00:20:13.849 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3034745 00:20:13.849 Received shutdown signal, test time was about 1.000000 seconds 00:20:13.849 00:20:13.849 Latency(us) 00:20:13.849 [2024-12-06T14:37:19.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.849 [2024-12-06T14:37:19.847Z] =================================================================================================================== 00:20:13.849 [2024-12-06T14:37:19.847Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.849 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3034745 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:14.108 rmmod nvme_tcp 00:20:14.108 rmmod nvme_fabrics 00:20:14.108 rmmod nvme_keyring 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3034498 ']' 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3034498 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3034498 ']' 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3034498 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3034498 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3034498' 00:20:14.108 killing process with pid 3034498 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3034498 00:20:14.108 15:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3034498 00:20:14.367 15:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:14.367 15:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:14.367 15:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:14.367 15:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:20:14.367 15:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:14.367 15:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:14.367 15:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:14.367 15:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:14.367 15:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:14.367 15:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.367 15:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.367 15:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.273 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:16.273 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.47p34iWV6v /tmp/tmp.OLnr7ru9kN /tmp/tmp.vGqeJPsI7M 00:20:16.273 00:20:16.273 real 1m19.789s 00:20:16.273 user 2m1.787s 00:20:16.273 sys 0m30.982s 00:20:16.273 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.273 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.273 ************************************ 00:20:16.273 END TEST nvmf_tls 00:20:16.273 ************************************ 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:16.533 ************************************ 00:20:16.533 START TEST nvmf_fips 00:20:16.533 ************************************ 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:16.533 * Looking for test storage... 00:20:16.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:16.533 
15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:16.533 15:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:16.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.533 --rc genhtml_branch_coverage=1 00:20:16.533 --rc genhtml_function_coverage=1 00:20:16.533 --rc genhtml_legend=1 00:20:16.533 --rc geninfo_all_blocks=1 00:20:16.533 --rc geninfo_unexecuted_blocks=1 00:20:16.533 00:20:16.533 ' 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:16.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.533 --rc genhtml_branch_coverage=1 00:20:16.533 --rc genhtml_function_coverage=1 00:20:16.533 --rc genhtml_legend=1 00:20:16.533 --rc geninfo_all_blocks=1 00:20:16.533 --rc geninfo_unexecuted_blocks=1 00:20:16.533 00:20:16.533 ' 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:16.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.533 --rc genhtml_branch_coverage=1 00:20:16.533 --rc genhtml_function_coverage=1 00:20:16.533 --rc genhtml_legend=1 00:20:16.533 --rc geninfo_all_blocks=1 00:20:16.533 --rc geninfo_unexecuted_blocks=1 00:20:16.533 00:20:16.533 ' 00:20:16.533 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:16.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.533 --rc genhtml_branch_coverage=1 00:20:16.533 --rc genhtml_function_coverage=1 00:20:16.533 --rc genhtml_legend=1 00:20:16.533 --rc geninfo_all_blocks=1 00:20:16.534 --rc geninfo_unexecuted_blocks=1 00:20:16.534 00:20:16.534 ' 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.534 15:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.534 15:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:16.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:16.534 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:16.795 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:16.795 Error setting digest 00:20:16.795 4022E53ECB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:16.796 4022E53ECB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:16.796 15:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:16.796 15:37:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:23.361 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:23.361 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:23.361 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:23.361 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:23.361 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:23.361 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:23.361 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:23.362 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:23.362 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:23.362 Found net devices under 0000:86:00.0: cvl_0_0 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:23.362 Found net devices under 0000:86:00.1: cvl_0_1 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:23.362 15:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:23.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:20:23.362 00:20:23.362 --- 10.0.0.2 ping statistics --- 00:20:23.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.362 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:23.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:23.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:20:23.362 00:20:23.362 --- 10.0.0.1 ping statistics --- 00:20:23.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.362 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:23.362 15:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3038768 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3038768 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3038768 ']' 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.362 15:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:23.362 [2024-12-06 15:37:28.753444] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:20:23.362 [2024-12-06 15:37:28.753497] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.362 [2024-12-06 15:37:28.833863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.362 [2024-12-06 15:37:28.875809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.362 [2024-12-06 15:37:28.875843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.362 [2024-12-06 15:37:28.875850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.362 [2024-12-06 15:37:28.875857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.362 [2024-12-06 15:37:28.875862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:23.362 [2024-12-06 15:37:28.876423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.gs1 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.gs1 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.gs1 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.gs1 00:20:23.622 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:23.880 [2024-12-06 15:37:29.782597] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.880 [2024-12-06 15:37:29.798602] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:23.880 [2024-12-06 15:37:29.798782] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.880 malloc0 00:20:23.880 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.880 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3039019 00:20:23.880 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.880 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3039019 /var/tmp/bdevperf.sock 00:20:23.880 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3039019 ']' 00:20:23.880 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.880 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.880 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.880 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.880 15:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:24.180 [2024-12-06 15:37:29.927438] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:20:24.180 [2024-12-06 15:37:29.927488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039019 ] 00:20:24.180 [2024-12-06 15:37:29.999531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.180 [2024-12-06 15:37:30.046428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.113 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.113 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:25.113 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.gs1 00:20:25.113 15:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:25.371 [2024-12-06 15:37:31.112756] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.371 TLSTESTn1 00:20:25.371 15:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:25.371 Running I/O for 10 seconds... 
00:20:27.683 5399.00 IOPS, 21.09 MiB/s [2024-12-06T14:37:34.618Z] 5448.50 IOPS, 21.28 MiB/s [2024-12-06T14:37:35.555Z] 5530.00 IOPS, 21.60 MiB/s [2024-12-06T14:37:36.489Z] 5553.00 IOPS, 21.69 MiB/s [2024-12-06T14:37:37.423Z] 5556.20 IOPS, 21.70 MiB/s [2024-12-06T14:37:38.386Z] 5570.33 IOPS, 21.76 MiB/s [2024-12-06T14:37:39.389Z] 5583.86 IOPS, 21.81 MiB/s [2024-12-06T14:37:40.326Z] 5586.88 IOPS, 21.82 MiB/s [2024-12-06T14:37:41.703Z] 5584.89 IOPS, 21.82 MiB/s [2024-12-06T14:37:41.704Z] 5581.50 IOPS, 21.80 MiB/s 00:20:35.706 Latency(us) 00:20:35.706 [2024-12-06T14:37:41.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.706 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:35.706 Verification LBA range: start 0x0 length 0x2000 00:20:35.706 TLSTESTn1 : 10.01 5587.21 21.83 0.00 0.00 22876.70 5149.26 25215.76 00:20:35.706 [2024-12-06T14:37:41.704Z] =================================================================================================================== 00:20:35.706 [2024-12-06T14:37:41.704Z] Total : 5587.21 21.83 0.00 0.00 22876.70 5149.26 25215.76 00:20:35.706 { 00:20:35.706 "results": [ 00:20:35.706 { 00:20:35.706 "job": "TLSTESTn1", 00:20:35.706 "core_mask": "0x4", 00:20:35.706 "workload": "verify", 00:20:35.706 "status": "finished", 00:20:35.706 "verify_range": { 00:20:35.706 "start": 0, 00:20:35.706 "length": 8192 00:20:35.706 }, 00:20:35.706 "queue_depth": 128, 00:20:35.706 "io_size": 4096, 00:20:35.706 "runtime": 10.012153, 00:20:35.706 "iops": 5587.20986385246, 00:20:35.706 "mibps": 21.825038530673673, 00:20:35.706 "io_failed": 0, 00:20:35.706 "io_timeout": 0, 00:20:35.706 "avg_latency_us": 22876.704696800993, 00:20:35.706 "min_latency_us": 5149.257142857143, 00:20:35.706 "max_latency_us": 25215.75619047619 00:20:35.706 } 00:20:35.706 ], 00:20:35.706 "core_count": 1 00:20:35.706 } 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:35.706 
15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:35.706 nvmf_trace.0 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3039019 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3039019 ']' 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3039019 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3039019 00:20:35.706 15:37:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3039019' 00:20:35.706 killing process with pid 3039019 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3039019 00:20:35.706 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.706 00:20:35.706 Latency(us) 00:20:35.706 [2024-12-06T14:37:41.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.706 [2024-12-06T14:37:41.704Z] =================================================================================================================== 00:20:35.706 [2024-12-06T14:37:41.704Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3039019 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:35.706 rmmod nvme_tcp 00:20:35.706 rmmod nvme_fabrics 00:20:35.706 rmmod nvme_keyring 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3038768 ']' 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3038768 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3038768 ']' 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3038768 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.706 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3038768 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3038768' 00:20:35.965 killing process with pid 3038768 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3038768 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3038768 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.965 15:37:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.501 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:38.501 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.gs1 00:20:38.501 00:20:38.501 real 0m21.685s 00:20:38.501 user 0m23.328s 00:20:38.501 sys 0m9.731s 00:20:38.501 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.501 15:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.501 ************************************ 00:20:38.501 END TEST nvmf_fips 00:20:38.501 ************************************ 00:20:38.501 15:37:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:38.501 15:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:38.501 15:37:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.501 15:37:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:38.501 ************************************ 00:20:38.501 START TEST nvmf_control_msg_list 00:20:38.501 ************************************ 00:20:38.501 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:38.501 * Looking for test storage... 00:20:38.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.502 15:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:38.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.502 --rc genhtml_branch_coverage=1 00:20:38.502 --rc genhtml_function_coverage=1 00:20:38.502 --rc genhtml_legend=1 00:20:38.502 --rc geninfo_all_blocks=1 00:20:38.502 --rc geninfo_unexecuted_blocks=1 00:20:38.502 00:20:38.502 ' 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:38.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.502 --rc genhtml_branch_coverage=1 00:20:38.502 --rc genhtml_function_coverage=1 00:20:38.502 --rc genhtml_legend=1 00:20:38.502 --rc geninfo_all_blocks=1 00:20:38.502 --rc geninfo_unexecuted_blocks=1 00:20:38.502 00:20:38.502 ' 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:38.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.502 --rc genhtml_branch_coverage=1 00:20:38.502 --rc genhtml_function_coverage=1 00:20:38.502 --rc genhtml_legend=1 00:20:38.502 --rc geninfo_all_blocks=1 00:20:38.502 --rc geninfo_unexecuted_blocks=1 00:20:38.502 00:20:38.502 ' 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:20:38.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.502 --rc genhtml_branch_coverage=1 00:20:38.502 --rc genhtml_function_coverage=1 00:20:38.502 --rc genhtml_legend=1 00:20:38.502 --rc geninfo_all_blocks=1 00:20:38.502 --rc geninfo_unexecuted_blocks=1 00:20:38.502 00:20:38.502 ' 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.502 15:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.502 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:38.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:38.503 15:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:38.503 15:37:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.066 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:45.066 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:45.066 15:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:45.066 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:45.066 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:45.066 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:45.066 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:45.066 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:45.066 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:45.066 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:45.066 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:45.066 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:45.067 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:45.067 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:45.067 15:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:45.067 Found net devices under 0000:86:00.0: cvl_0_0 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:45.067 15:37:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:45.067 Found net devices under 0000:86:00.1: cvl_0_1 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.067 15:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:45.067 15:37:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:45.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:20:45.067 00:20:45.067 --- 10.0.0.2 ping statistics --- 00:20:45.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.067 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:45.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:45.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:20:45.067 00:20:45.067 --- 10.0.0.1 ping statistics --- 00:20:45.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.067 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:45.067 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:45.068 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:45.068 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:45.068 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:45.068 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.068 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.068 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3044396 00:20:45.068 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:45.068 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3044396 00:20:45.068 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3044396 ']' 00:20:45.068 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.068 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.068 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:45.068 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.068 15:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.068 [2024-12-06 15:37:50.249250] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:20:45.068 [2024-12-06 15:37:50.249294] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.068 [2024-12-06 15:37:50.328791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.068 [2024-12-06 15:37:50.369514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.068 [2024-12-06 15:37:50.369545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.068 [2024-12-06 15:37:50.369552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.068 [2024-12-06 15:37:50.369558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.068 [2024-12-06 15:37:50.369563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:45.068 [2024-12-06 15:37:50.370121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.326 [2024-12-06 15:37:51.112040] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.326 Malloc0 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.326 [2024-12-06 15:37:51.148451] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3044623 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3044625 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3044627 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.326 15:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3044623 00:20:45.326 [2024-12-06 15:37:51.237024] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:45.326 [2024-12-06 15:37:51.237183] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:45.326 [2024-12-06 15:37:51.246785] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:46.699 Initializing NVMe Controllers 00:20:46.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:46.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:46.699 Initialization complete. Launching workers. 00:20:46.699 ======================================================== 00:20:46.699 Latency(us) 00:20:46.699 Device Information : IOPS MiB/s Average min max 00:20:46.699 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6369.00 24.88 156.66 136.19 386.81 00:20:46.699 ======================================================== 00:20:46.699 Total : 6369.00 24.88 156.66 136.19 386.81 00:20:46.699 00:20:46.699 Initializing NVMe Controllers 00:20:46.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:46.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:46.699 Initialization complete. Launching workers. 
00:20:46.699 ======================================================== 00:20:46.699 Latency(us) 00:20:46.699 Device Information : IOPS MiB/s Average min max 00:20:46.699 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6367.00 24.87 156.72 136.38 431.15 00:20:46.699 ======================================================== 00:20:46.699 Total : 6367.00 24.87 156.72 136.38 431.15 00:20:46.699 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3044625 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3044627 00:20:46.699 Initializing NVMe Controllers 00:20:46.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:46.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:46.699 Initialization complete. Launching workers. 00:20:46.699 ======================================================== 00:20:46.699 Latency(us) 00:20:46.699 Device Information : IOPS MiB/s Average min max 00:20:46.699 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40952.98 40820.72 41938.08 00:20:46.699 ======================================================== 00:20:46.699 Total : 25.00 0.10 40952.98 40820.72 41938.08 00:20:46.699 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:46.699 15:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:46.699 rmmod nvme_tcp 00:20:46.699 rmmod nvme_fabrics 00:20:46.699 rmmod nvme_keyring 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3044396 ']' 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3044396 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3044396 ']' 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3044396 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3044396 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3044396' 00:20:46.699 killing process with pid 3044396 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3044396 00:20:46.699 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3044396 00:20:46.959 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:46.959 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:46.959 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:46.959 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:46.959 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:46.959 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:46.959 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:46.959 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:46.959 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:46.959 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.959 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.959 15:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.495 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:49.495 00:20:49.495 real 0m10.810s 00:20:49.495 user 0m7.488s 
00:20:49.495 sys 0m5.628s 00:20:49.495 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:49.495 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:49.495 ************************************ 00:20:49.495 END TEST nvmf_control_msg_list 00:20:49.495 ************************************ 00:20:49.495 15:37:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:49.495 15:37:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:49.495 15:37:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.495 15:37:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:49.495 ************************************ 00:20:49.495 START TEST nvmf_wait_for_buf 00:20:49.495 ************************************ 00:20:49.495 15:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:49.495 * Looking for test storage... 
00:20:49.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:49.495 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:49.495 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:20:49.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.496 --rc genhtml_branch_coverage=1 00:20:49.496 --rc genhtml_function_coverage=1 00:20:49.496 --rc genhtml_legend=1 00:20:49.496 --rc geninfo_all_blocks=1 00:20:49.496 --rc geninfo_unexecuted_blocks=1 00:20:49.496 00:20:49.496 ' 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:49.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.496 --rc genhtml_branch_coverage=1 00:20:49.496 --rc genhtml_function_coverage=1 00:20:49.496 --rc genhtml_legend=1 00:20:49.496 --rc geninfo_all_blocks=1 00:20:49.496 --rc geninfo_unexecuted_blocks=1 00:20:49.496 00:20:49.496 ' 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:49.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.496 --rc genhtml_branch_coverage=1 00:20:49.496 --rc genhtml_function_coverage=1 00:20:49.496 --rc genhtml_legend=1 00:20:49.496 --rc geninfo_all_blocks=1 00:20:49.496 --rc geninfo_unexecuted_blocks=1 00:20:49.496 00:20:49.496 ' 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:49.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.496 --rc genhtml_branch_coverage=1 00:20:49.496 --rc genhtml_function_coverage=1 00:20:49.496 --rc genhtml_legend=1 00:20:49.496 --rc geninfo_all_blocks=1 00:20:49.496 --rc geninfo_unexecuted_blocks=1 00:20:49.496 00:20:49.496 ' 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:49.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:49.496 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:49.497 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:49.497 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:49.497 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:49.497 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.497 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.497 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.497 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:49.497 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:49.497 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:49.497 15:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:56.067 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:56.067 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.067 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:56.067 Found net devices under 0000:86:00.0: cvl_0_0 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.068 15:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:56.068 Found net devices under 0000:86:00.1: cvl_0_1 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:56.068 15:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:56.068 15:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.068 15:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:56.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:20:56.068 00:20:56.068 --- 10.0.0.2 ping statistics --- 00:20:56.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.068 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:56.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:20:56.068 00:20:56.068 --- 10.0.0.1 ping statistics --- 00:20:56.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.068 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3048366 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3048366 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3048366 ']' 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 [2024-12-06 15:38:01.148675] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:20:56.068 [2024-12-06 15:38:01.148721] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.068 [2024-12-06 15:38:01.227446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.068 [2024-12-06 15:38:01.267663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.068 [2024-12-06 15:38:01.267696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:56.068 [2024-12-06 15:38:01.267703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.068 [2024-12-06 15:38:01.267709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.068 [2024-12-06 15:38:01.267714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.068 [2024-12-06 15:38:01.268250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 
15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 Malloc0 00:20:56.068 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:20:56.069 [2024-12-06 15:38:01.437429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.069 [2024-12-06 15:38:01.465620] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:56.069 15:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:56.069 [2024-12-06 15:38:01.551261] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:20:57.004 Initializing NVMe Controllers
00:20:57.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:20:57.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:20:57.004 Initialization complete. Launching workers.
00:20:57.004 ========================================================
00:20:57.004 Latency(us)
00:20:57.004 Device Information : IOPS MiB/s Average min max
00:20:57.004 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32239.10 7267.97 63849.92
00:20:57.004 ========================================================
00:20:57.004 Total : 129.00 16.12 32239.10 7267.97 63849.92
00:20:57.004
00:20:57.004 15:38:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:20:57.004 15:38:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:20:57.004 15:38:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.004 15:38:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:57.004 15:38:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:57.263 15:38:03
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:57.263 rmmod nvme_tcp 00:20:57.263 rmmod nvme_fabrics 00:20:57.263 rmmod nvme_keyring 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3048366 ']' 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3048366 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3048366 ']' 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3048366 
00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3048366 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3048366' 00:20:57.263 killing process with pid 3048366 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3048366 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3048366 00:20:57.263 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:57.264 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:57.264 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:57.264 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:57.264 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:57.264 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:57.264 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:57.523 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:57.523 15:38:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:20:57.523 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:57.523 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:57.523 15:38:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:59.428 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:59.428
00:20:59.428 real 0m10.378s
00:20:59.428 user 0m3.869s
00:20:59.428 sys 0m4.937s
00:20:59.428 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:59.428 15:38:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:59.428 ************************************
00:20:59.428 END TEST nvmf_wait_for_buf
00:20:59.428 ************************************
00:20:59.428 15:38:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']'
00:20:59.428 15:38:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]]
00:20:59.428 15:38:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']'
00:20:59.428 15:38:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs
00:20:59.428 15:38:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable
00:20:59.428 15:38:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=()
00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs
00:21:05.999
15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.999 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.000 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.000 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:06.000 15:38:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:06.000 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.000 15:38:11 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:06.000 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:06.000 Found net devices under 0000:86:00.0: cvl_0_0 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:06.000 Found net devices under 0000:86:00.1: cvl_0_1 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:06.000 ************************************ 00:21:06.000 START TEST nvmf_perf_adq 00:21:06.000 ************************************ 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:06.000 * Looking for test storage... 00:21:06.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:06.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.000 --rc genhtml_branch_coverage=1 00:21:06.000 --rc genhtml_function_coverage=1 00:21:06.000 --rc genhtml_legend=1 00:21:06.000 --rc geninfo_all_blocks=1 00:21:06.000 --rc geninfo_unexecuted_blocks=1 00:21:06.000 00:21:06.000 ' 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:06.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.000 --rc genhtml_branch_coverage=1 00:21:06.000 --rc genhtml_function_coverage=1 00:21:06.000 --rc genhtml_legend=1 00:21:06.000 --rc geninfo_all_blocks=1 00:21:06.000 --rc geninfo_unexecuted_blocks=1 00:21:06.000 00:21:06.000 ' 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:06.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.000 --rc genhtml_branch_coverage=1 00:21:06.000 --rc genhtml_function_coverage=1 00:21:06.000 --rc genhtml_legend=1 00:21:06.000 --rc geninfo_all_blocks=1 00:21:06.000 --rc geninfo_unexecuted_blocks=1 00:21:06.000 00:21:06.000 ' 00:21:06.000 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:06.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.000 --rc genhtml_branch_coverage=1 00:21:06.000 --rc genhtml_function_coverage=1 00:21:06.001 --rc genhtml_legend=1 00:21:06.001 --rc geninfo_all_blocks=1 00:21:06.001 --rc geninfo_unexecuted_blocks=1 00:21:06.001 00:21:06.001 ' 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.001 15:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:06.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:06.001 15:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:11.296 15:38:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:11.296 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:11.297 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:11.297 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:11.297 Found net devices under 0000:86:00.0: cvl_0_0 00:21:11.297 15:38:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:11.297 Found net devices under 0000:86:00.1: cvl_0_1 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:11.297 15:38:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:12.234 15:38:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:14.141 15:38:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:19.418 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:19.418 15:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:19.418 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:19.418 Found net devices under 0000:86:00.0: cvl_0_0 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:19.418 Found net devices under 0000:86:00.1: cvl_0_1 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:19.418 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:19.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:21:19.419 00:21:19.419 --- 10.0.0.2 ping statistics --- 00:21:19.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.419 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:21:19.419 00:21:19.419 --- 10.0.0.1 ping statistics --- 00:21:19.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.419 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3056619 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3056619 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3056619 ']' 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.419 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.419 [2024-12-06 15:38:25.382284] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:21:19.419 [2024-12-06 15:38:25.382330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.676 [2024-12-06 15:38:25.460400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.676 [2024-12-06 15:38:25.502290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.676 [2024-12-06 15:38:25.502328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.676 [2024-12-06 15:38:25.502336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.676 [2024-12-06 15:38:25.502342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.676 [2024-12-06 15:38:25.502347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:19.676 [2024-12-06 15:38:25.503924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.676 [2024-12-06 15:38:25.504040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.676 [2024-12-06 15:38:25.504126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.676 [2024-12-06 15:38:25.504128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:19.676 15:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.676 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.677 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.677 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:19.677 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.677 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.934 [2024-12-06 15:38:25.715070] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.934 Malloc1 00:21:19.934 15:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.934 [2024-12-06 15:38:25.778460] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3056772 00:21:19.934 15:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:19.934 15:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:21.832 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:21.832 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.832 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:21.832 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.832 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:21.832 "tick_rate": 2100000000, 00:21:21.832 "poll_groups": [ 00:21:21.832 { 00:21:21.832 "name": "nvmf_tgt_poll_group_000", 00:21:21.832 "admin_qpairs": 1, 00:21:21.832 "io_qpairs": 1, 00:21:21.832 "current_admin_qpairs": 1, 00:21:21.832 "current_io_qpairs": 1, 00:21:21.832 "pending_bdev_io": 0, 00:21:21.832 "completed_nvme_io": 19982, 00:21:21.832 "transports": [ 00:21:21.832 { 00:21:21.832 "trtype": "TCP" 00:21:21.832 } 00:21:21.832 ] 00:21:21.832 }, 00:21:21.832 { 00:21:21.832 "name": "nvmf_tgt_poll_group_001", 00:21:21.832 "admin_qpairs": 0, 00:21:21.832 "io_qpairs": 1, 00:21:21.832 "current_admin_qpairs": 0, 00:21:21.832 "current_io_qpairs": 1, 00:21:21.832 "pending_bdev_io": 0, 00:21:21.832 "completed_nvme_io": 20499, 00:21:21.832 "transports": [ 00:21:21.832 { 00:21:21.832 "trtype": "TCP" 00:21:21.832 } 00:21:21.832 ] 00:21:21.832 }, 00:21:21.832 { 00:21:21.832 "name": "nvmf_tgt_poll_group_002", 00:21:21.832 "admin_qpairs": 0, 00:21:21.832 "io_qpairs": 1, 00:21:21.832 "current_admin_qpairs": 0, 00:21:21.832 "current_io_qpairs": 1, 00:21:21.832 "pending_bdev_io": 0, 00:21:21.832 "completed_nvme_io": 20154, 00:21:21.832 
"transports": [ 00:21:21.832 { 00:21:21.832 "trtype": "TCP" 00:21:21.832 } 00:21:21.832 ] 00:21:21.832 }, 00:21:21.832 { 00:21:21.832 "name": "nvmf_tgt_poll_group_003", 00:21:21.832 "admin_qpairs": 0, 00:21:21.832 "io_qpairs": 1, 00:21:21.832 "current_admin_qpairs": 0, 00:21:21.832 "current_io_qpairs": 1, 00:21:21.832 "pending_bdev_io": 0, 00:21:21.832 "completed_nvme_io": 20030, 00:21:21.832 "transports": [ 00:21:21.832 { 00:21:21.832 "trtype": "TCP" 00:21:21.832 } 00:21:21.832 ] 00:21:21.832 } 00:21:21.832 ] 00:21:21.832 }' 00:21:21.832 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:21.832 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:22.089 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:22.089 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:22.089 15:38:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3056772 00:21:30.188 Initializing NVMe Controllers 00:21:30.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:30.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:30.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:30.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:30.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:30.188 Initialization complete. Launching workers. 
00:21:30.188 ======================================================== 00:21:30.188 Latency(us) 00:21:30.188 Device Information : IOPS MiB/s Average min max 00:21:30.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10365.70 40.49 6173.85 1872.37 10462.33 00:21:30.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10641.30 41.57 6014.05 2306.02 10200.84 00:21:30.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10461.00 40.86 6117.43 2008.62 10468.72 00:21:30.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10496.60 41.00 6098.25 1848.69 10791.42 00:21:30.188 ======================================================== 00:21:30.188 Total : 41964.59 163.92 6100.35 1848.69 10791.42 00:21:30.188 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:30.188 rmmod nvme_tcp 00:21:30.188 rmmod nvme_fabrics 00:21:30.188 rmmod nvme_keyring 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:30.188 15:38:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3056619 ']' 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3056619 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3056619 ']' 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3056619 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.188 15:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3056619 00:21:30.188 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:30.188 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:30.188 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3056619' 00:21:30.188 killing process with pid 3056619 00:21:30.188 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3056619 00:21:30.188 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3056619 00:21:30.448 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:30.448 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:30.448 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:30.448 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:30.448 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:30.448 
15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:30.448 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:30.448 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:30.448 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:30.448 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.448 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.448 15:38:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.355 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:32.355 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:32.355 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:32.355 15:38:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:33.735 15:38:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:35.640 15:38:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:41.067 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:41.068 15:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:41.068 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:41.068 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:41.068 Found net devices under 0000:86:00.0: cvl_0_0 00:21:41.068 15:38:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:41.068 Found net devices under 0000:86:00.1: cvl_0_1 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:41.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:21:41.068 00:21:41.068 --- 10.0.0.2 ping statistics --- 00:21:41.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.068 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:41.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:21:41.068 00:21:41.068 --- 10.0.0.1 ping statistics --- 00:21:41.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.068 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:41.068 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:41.069 net.core.busy_poll = 1 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:41.069 net.core.busy_read = 1 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3060553 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3060553 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3060553 ']' 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.069 15:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.069 [2024-12-06 15:38:47.036065] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:21:41.069 [2024-12-06 15:38:47.036121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.328 [2024-12-06 15:38:47.114086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.328 [2024-12-06 15:38:47.156599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.328 [2024-12-06 15:38:47.156639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.328 [2024-12-06 15:38:47.156646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.328 [2024-12-06 15:38:47.156652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:41.328 [2024-12-06 15:38:47.156657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.328 [2024-12-06 15:38:47.158162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.328 [2024-12-06 15:38:47.158272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.328 [2024-12-06 15:38:47.158403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.328 [2024-12-06 15:38:47.158404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.942 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.942 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:41.942 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.942 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.942 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.942 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.942 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:41.942 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:41.942 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:41.942 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.942 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.942 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:42.200 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:42.201 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:42.201 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.201 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.201 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.201 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:42.201 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.201 15:38:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.201 [2024-12-06 15:38:48.041840] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.201 15:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.201 Malloc1 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.201 [2024-12-06 15:38:48.102375] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3060690 
00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:42.201 15:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:44.730 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:44.730 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.730 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:44.731 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.731 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:44.731 "tick_rate": 2100000000, 00:21:44.731 "poll_groups": [ 00:21:44.731 { 00:21:44.731 "name": "nvmf_tgt_poll_group_000", 00:21:44.731 "admin_qpairs": 1, 00:21:44.731 "io_qpairs": 3, 00:21:44.731 "current_admin_qpairs": 1, 00:21:44.731 "current_io_qpairs": 3, 00:21:44.731 "pending_bdev_io": 0, 00:21:44.731 "completed_nvme_io": 28744, 00:21:44.731 "transports": [ 00:21:44.731 { 00:21:44.731 "trtype": "TCP" 00:21:44.731 } 00:21:44.731 ] 00:21:44.731 }, 00:21:44.731 { 00:21:44.731 "name": "nvmf_tgt_poll_group_001", 00:21:44.731 "admin_qpairs": 0, 00:21:44.731 "io_qpairs": 1, 00:21:44.731 "current_admin_qpairs": 0, 00:21:44.731 "current_io_qpairs": 1, 00:21:44.731 "pending_bdev_io": 0, 00:21:44.731 "completed_nvme_io": 28907, 00:21:44.731 "transports": [ 00:21:44.731 { 00:21:44.731 "trtype": "TCP" 00:21:44.731 } 00:21:44.731 ] 00:21:44.731 }, 00:21:44.731 { 00:21:44.731 "name": "nvmf_tgt_poll_group_002", 00:21:44.731 "admin_qpairs": 0, 00:21:44.731 "io_qpairs": 0, 00:21:44.731 "current_admin_qpairs": 0, 
00:21:44.731 "current_io_qpairs": 0, 00:21:44.731 "pending_bdev_io": 0, 00:21:44.731 "completed_nvme_io": 0, 00:21:44.731 "transports": [ 00:21:44.731 { 00:21:44.731 "trtype": "TCP" 00:21:44.731 } 00:21:44.731 ] 00:21:44.731 }, 00:21:44.731 { 00:21:44.731 "name": "nvmf_tgt_poll_group_003", 00:21:44.731 "admin_qpairs": 0, 00:21:44.731 "io_qpairs": 0, 00:21:44.731 "current_admin_qpairs": 0, 00:21:44.731 "current_io_qpairs": 0, 00:21:44.731 "pending_bdev_io": 0, 00:21:44.731 "completed_nvme_io": 0, 00:21:44.731 "transports": [ 00:21:44.731 { 00:21:44.731 "trtype": "TCP" 00:21:44.731 } 00:21:44.731 ] 00:21:44.731 } 00:21:44.731 ] 00:21:44.731 }' 00:21:44.731 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:44.731 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:44.731 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:44.731 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:44.731 15:38:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3060690 00:21:52.850 Initializing NVMe Controllers 00:21:52.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:52.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:52.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:52.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:52.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:52.850 Initialization complete. Launching workers. 
00:21:52.850 ======================================================== 00:21:52.850 Latency(us) 00:21:52.850 Device Information : IOPS MiB/s Average min max 00:21:52.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4767.70 18.62 13426.64 1814.10 60065.45 00:21:52.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5365.10 20.96 11931.94 1857.60 57876.25 00:21:52.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5084.40 19.86 12589.87 1543.50 58161.82 00:21:52.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 15557.90 60.77 4113.00 1448.83 6747.20 00:21:52.850 ======================================================== 00:21:52.850 Total : 30775.10 120.22 8319.45 1448.83 60065.45 00:21:52.850 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:52.850 rmmod nvme_tcp 00:21:52.850 rmmod nvme_fabrics 00:21:52.850 rmmod nvme_keyring 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:52.850 15:38:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3060553 ']' 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3060553 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3060553 ']' 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3060553 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3060553 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3060553' 00:21:52.850 killing process with pid 3060553 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3060553 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3060553 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:52.850 
15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.850 15:38:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:56.140 00:21:56.140 real 0m50.634s 00:21:56.140 user 2m46.464s 00:21:56.140 sys 0m10.408s 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.140 ************************************ 00:21:56.140 END TEST nvmf_perf_adq 00:21:56.140 ************************************ 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:56.140 ************************************ 00:21:56.140 START TEST nvmf_shutdown 00:21:56.140 ************************************ 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:56.140 * Looking for test storage... 00:21:56.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:56.140 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:56.141 15:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:56.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.141 --rc genhtml_branch_coverage=1 00:21:56.141 --rc genhtml_function_coverage=1 00:21:56.141 --rc genhtml_legend=1 00:21:56.141 --rc geninfo_all_blocks=1 00:21:56.141 --rc geninfo_unexecuted_blocks=1 00:21:56.141 00:21:56.141 ' 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:56.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.141 --rc genhtml_branch_coverage=1 00:21:56.141 --rc genhtml_function_coverage=1 00:21:56.141 --rc genhtml_legend=1 00:21:56.141 --rc geninfo_all_blocks=1 00:21:56.141 --rc geninfo_unexecuted_blocks=1 00:21:56.141 00:21:56.141 ' 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:56.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.141 --rc genhtml_branch_coverage=1 00:21:56.141 --rc genhtml_function_coverage=1 00:21:56.141 --rc genhtml_legend=1 00:21:56.141 --rc geninfo_all_blocks=1 00:21:56.141 --rc geninfo_unexecuted_blocks=1 00:21:56.141 00:21:56.141 ' 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:56.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.141 --rc genhtml_branch_coverage=1 00:21:56.141 --rc genhtml_function_coverage=1 00:21:56.141 --rc genhtml_legend=1 00:21:56.141 --rc geninfo_all_blocks=1 00:21:56.141 --rc geninfo_unexecuted_blocks=1 00:21:56.141 00:21:56.141 ' 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:56.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:56.141 15:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:56.141 ************************************ 00:21:56.141 START TEST nvmf_shutdown_tc1 00:21:56.141 ************************************ 00:21:56.141 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:56.141 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:56.141 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:56.141 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:56.141 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.141 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:56.142 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:56.142 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:56.142 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.142 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:56.142 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.142 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:56.142 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:56.142 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:56.142 15:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:02.708 15:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.708 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.709 15:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:02.709 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.709 15:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:02.709 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:02.709 Found net devices under 0000:86:00.0: cvl_0_0 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:02.709 Found net devices under 0000:86:00.1: cvl_0_1 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.709 15:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:22:02.709 00:22:02.709 --- 10.0.0.2 ping statistics --- 00:22:02.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.709 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:22:02.709 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:22:02.710 00:22:02.710 --- 10.0.0.1 ping statistics --- 00:22:02.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.710 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:22:02.710 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.710 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:02.710 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:02.710 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.710 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:02.710 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:02.710 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
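The bring-up traced above (nvmf/common.sh@250–291) follows a fixed recipe: flush both NICs, move the target NIC into a private network namespace, address both ends on 10.0.0.0/24, open TCP port 4420 in the firewall, and ping-check the link in both directions before returning 0. A minimal dry-run sketch of that sequence, assuming the cvl_0_0/cvl_0_1 interface names and addresses shown in this log; `run()` is a hypothetical helper that only echoes, so the sketch is safe to execute without root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns bring-up traced in nvmf/common.sh.
# run() only echoes and records each command; swap it for "$@"
# (as root) to actually apply the configuration.
CMDS=()
run() { CMDS+=("$*"); echo "+ $*"; }

TARGET_IF=cvl_0_0       # NIC handed to the SPDK target
INITIATOR_IF=cvl_0_1    # NIC kept in the root namespace
NS=cvl_0_0_ns_spdk      # private namespace for the target side

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

Because the target interface lives in the namespace, every target-side command in the rest of the log is wrapped in `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix), including the `nvmf_tgt` launch itself.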
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.710 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:02.710 15:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3066237 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3066237 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3066237 ']' 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:02.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.710 [2024-12-06 15:39:08.088574] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:22:02.710 [2024-12-06 15:39:08.088627] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.710 [2024-12-06 15:39:08.169338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.710 [2024-12-06 15:39:08.211814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.710 [2024-12-06 15:39:08.211850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.710 [2024-12-06 15:39:08.211858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.710 [2024-12-06 15:39:08.211865] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.710 [2024-12-06 15:39:08.211871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:02.710 [2024-12-06 15:39:08.213489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.710 [2024-12-06 15:39:08.213526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.710 [2024-12-06 15:39:08.213636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.710 [2024-12-06 15:39:08.213637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.710 [2024-12-06 15:39:08.364086] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.710 15:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.710 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.710 Malloc1 00:22:02.710 [2024-12-06 15:39:08.471424] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.710 Malloc2 00:22:02.710 Malloc3 00:22:02.710 Malloc4 00:22:02.710 Malloc5 00:22:02.710 Malloc6 00:22:02.710 Malloc7 00:22:02.969 Malloc8 00:22:02.969 Malloc9 
00:22:02.969 Malloc10 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3066634 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3066634 /var/tmp/bdevperf.sock 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3066634 ']' 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.969 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.969 { 00:22:02.969 "params": { 00:22:02.969 "name": "Nvme$subsystem", 00:22:02.969 "trtype": "$TEST_TRANSPORT", 00:22:02.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.970 "adrfam": "ipv4", 00:22:02.970 "trsvcid": "$NVMF_PORT", 00:22:02.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.970 "hdgst": ${hdgst:-false}, 00:22:02.970 "ddgst": ${ddgst:-false} 00:22:02.970 }, 00:22:02.970 "method": "bdev_nvme_attach_controller" 00:22:02.970 } 00:22:02.970 EOF 00:22:02.970 )") 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.970 { 00:22:02.970 "params": { 00:22:02.970 "name": "Nvme$subsystem", 00:22:02.970 "trtype": "$TEST_TRANSPORT", 00:22:02.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.970 "adrfam": "ipv4", 00:22:02.970 "trsvcid": "$NVMF_PORT", 00:22:02.970 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.970 "hdgst": ${hdgst:-false}, 00:22:02.970 "ddgst": ${ddgst:-false} 00:22:02.970 }, 00:22:02.970 "method": "bdev_nvme_attach_controller" 00:22:02.970 } 00:22:02.970 EOF 00:22:02.970 )") 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.970 { 00:22:02.970 "params": { 00:22:02.970 "name": "Nvme$subsystem", 00:22:02.970 "trtype": "$TEST_TRANSPORT", 00:22:02.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.970 "adrfam": "ipv4", 00:22:02.970 "trsvcid": "$NVMF_PORT", 00:22:02.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.970 "hdgst": ${hdgst:-false}, 00:22:02.970 "ddgst": ${ddgst:-false} 00:22:02.970 }, 00:22:02.970 "method": "bdev_nvme_attach_controller" 00:22:02.970 } 00:22:02.970 EOF 00:22:02.970 )") 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.970 { 00:22:02.970 "params": { 00:22:02.970 "name": "Nvme$subsystem", 00:22:02.970 "trtype": "$TEST_TRANSPORT", 00:22:02.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.970 "adrfam": "ipv4", 00:22:02.970 "trsvcid": "$NVMF_PORT", 00:22:02.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.970 "hdgst": 
${hdgst:-false}, 00:22:02.970 "ddgst": ${ddgst:-false} 00:22:02.970 }, 00:22:02.970 "method": "bdev_nvme_attach_controller" 00:22:02.970 } 00:22:02.970 EOF 00:22:02.970 )") 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.970 { 00:22:02.970 "params": { 00:22:02.970 "name": "Nvme$subsystem", 00:22:02.970 "trtype": "$TEST_TRANSPORT", 00:22:02.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.970 "adrfam": "ipv4", 00:22:02.970 "trsvcid": "$NVMF_PORT", 00:22:02.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.970 "hdgst": ${hdgst:-false}, 00:22:02.970 "ddgst": ${ddgst:-false} 00:22:02.970 }, 00:22:02.970 "method": "bdev_nvme_attach_controller" 00:22:02.970 } 00:22:02.970 EOF 00:22:02.970 )") 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.970 { 00:22:02.970 "params": { 00:22:02.970 "name": "Nvme$subsystem", 00:22:02.970 "trtype": "$TEST_TRANSPORT", 00:22:02.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.970 "adrfam": "ipv4", 00:22:02.970 "trsvcid": "$NVMF_PORT", 00:22:02.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.970 "hdgst": ${hdgst:-false}, 00:22:02.970 "ddgst": ${ddgst:-false} 00:22:02.970 }, 00:22:02.970 "method": "bdev_nvme_attach_controller" 
00:22:02.970 } 00:22:02.970 EOF 00:22:02.970 )") 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.970 { 00:22:02.970 "params": { 00:22:02.970 "name": "Nvme$subsystem", 00:22:02.970 "trtype": "$TEST_TRANSPORT", 00:22:02.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.970 "adrfam": "ipv4", 00:22:02.970 "trsvcid": "$NVMF_PORT", 00:22:02.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.970 "hdgst": ${hdgst:-false}, 00:22:02.970 "ddgst": ${ddgst:-false} 00:22:02.970 }, 00:22:02.970 "method": "bdev_nvme_attach_controller" 00:22:02.970 } 00:22:02.970 EOF 00:22:02.970 )") 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.970 [2024-12-06 15:39:08.944652] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:22:02.970 [2024-12-06 15:39:08.944702] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.970 { 00:22:02.970 "params": { 00:22:02.970 "name": "Nvme$subsystem", 00:22:02.970 "trtype": "$TEST_TRANSPORT", 00:22:02.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.970 "adrfam": "ipv4", 00:22:02.970 "trsvcid": "$NVMF_PORT", 00:22:02.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.970 "hdgst": ${hdgst:-false}, 00:22:02.970 "ddgst": ${ddgst:-false} 00:22:02.970 }, 00:22:02.970 "method": "bdev_nvme_attach_controller" 00:22:02.970 } 00:22:02.970 EOF 00:22:02.970 )") 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.970 { 00:22:02.970 "params": { 00:22:02.970 "name": "Nvme$subsystem", 00:22:02.970 "trtype": "$TEST_TRANSPORT", 00:22:02.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.970 "adrfam": "ipv4", 00:22:02.970 "trsvcid": "$NVMF_PORT", 00:22:02.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.970 "hdgst": ${hdgst:-false}, 00:22:02.970 "ddgst": ${ddgst:-false} 00:22:02.970 }, 00:22:02.970 "method": "bdev_nvme_attach_controller" 
00:22:02.970 } 00:22:02.970 EOF 00:22:02.970 )") 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.970 { 00:22:02.970 "params": { 00:22:02.970 "name": "Nvme$subsystem", 00:22:02.970 "trtype": "$TEST_TRANSPORT", 00:22:02.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.970 "adrfam": "ipv4", 00:22:02.970 "trsvcid": "$NVMF_PORT", 00:22:02.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.970 "hdgst": ${hdgst:-false}, 00:22:02.970 "ddgst": ${ddgst:-false} 00:22:02.970 }, 00:22:02.970 "method": "bdev_nvme_attach_controller" 00:22:02.970 } 00:22:02.970 EOF 00:22:02.970 )") 00:22:02.970 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.236 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:03.236 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:03.236 15:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:03.236 "params": { 00:22:03.236 "name": "Nvme1", 00:22:03.236 "trtype": "tcp", 00:22:03.236 "traddr": "10.0.0.2", 00:22:03.236 "adrfam": "ipv4", 00:22:03.236 "trsvcid": "4420", 00:22:03.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.236 "hdgst": false, 00:22:03.236 "ddgst": false 00:22:03.236 }, 00:22:03.236 "method": "bdev_nvme_attach_controller" 00:22:03.236 },{ 00:22:03.236 "params": { 00:22:03.236 "name": "Nvme2", 00:22:03.236 "trtype": "tcp", 00:22:03.236 "traddr": "10.0.0.2", 00:22:03.236 "adrfam": "ipv4", 00:22:03.236 "trsvcid": "4420", 00:22:03.236 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:03.236 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:03.236 "hdgst": false, 00:22:03.236 "ddgst": false 00:22:03.236 }, 00:22:03.236 "method": "bdev_nvme_attach_controller" 00:22:03.236 },{ 00:22:03.236 "params": { 00:22:03.236 "name": "Nvme3", 00:22:03.236 "trtype": "tcp", 00:22:03.236 "traddr": "10.0.0.2", 00:22:03.236 "adrfam": "ipv4", 00:22:03.236 "trsvcid": "4420", 00:22:03.236 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:03.236 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:03.236 "hdgst": false, 00:22:03.236 "ddgst": false 00:22:03.236 }, 00:22:03.236 "method": "bdev_nvme_attach_controller" 00:22:03.236 },{ 00:22:03.236 "params": { 00:22:03.236 "name": "Nvme4", 00:22:03.236 "trtype": "tcp", 00:22:03.236 "traddr": "10.0.0.2", 00:22:03.236 "adrfam": "ipv4", 00:22:03.236 "trsvcid": "4420", 00:22:03.236 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:03.236 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:03.236 "hdgst": false, 00:22:03.236 "ddgst": false 00:22:03.236 }, 00:22:03.236 "method": "bdev_nvme_attach_controller" 00:22:03.236 },{ 00:22:03.236 "params": { 
00:22:03.236 "name": "Nvme5", 00:22:03.236 "trtype": "tcp", 00:22:03.236 "traddr": "10.0.0.2", 00:22:03.236 "adrfam": "ipv4", 00:22:03.236 "trsvcid": "4420", 00:22:03.236 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:03.236 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:03.236 "hdgst": false, 00:22:03.236 "ddgst": false 00:22:03.236 }, 00:22:03.236 "method": "bdev_nvme_attach_controller" 00:22:03.236 },{ 00:22:03.236 "params": { 00:22:03.236 "name": "Nvme6", 00:22:03.236 "trtype": "tcp", 00:22:03.236 "traddr": "10.0.0.2", 00:22:03.236 "adrfam": "ipv4", 00:22:03.236 "trsvcid": "4420", 00:22:03.236 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:03.236 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:03.236 "hdgst": false, 00:22:03.236 "ddgst": false 00:22:03.236 }, 00:22:03.236 "method": "bdev_nvme_attach_controller" 00:22:03.236 },{ 00:22:03.236 "params": { 00:22:03.236 "name": "Nvme7", 00:22:03.236 "trtype": "tcp", 00:22:03.236 "traddr": "10.0.0.2", 00:22:03.236 "adrfam": "ipv4", 00:22:03.236 "trsvcid": "4420", 00:22:03.236 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:03.236 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:03.236 "hdgst": false, 00:22:03.236 "ddgst": false 00:22:03.236 }, 00:22:03.236 "method": "bdev_nvme_attach_controller" 00:22:03.236 },{ 00:22:03.236 "params": { 00:22:03.236 "name": "Nvme8", 00:22:03.236 "trtype": "tcp", 00:22:03.236 "traddr": "10.0.0.2", 00:22:03.236 "adrfam": "ipv4", 00:22:03.236 "trsvcid": "4420", 00:22:03.236 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:03.236 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:03.236 "hdgst": false, 00:22:03.236 "ddgst": false 00:22:03.236 }, 00:22:03.236 "method": "bdev_nvme_attach_controller" 00:22:03.236 },{ 00:22:03.236 "params": { 00:22:03.236 "name": "Nvme9", 00:22:03.236 "trtype": "tcp", 00:22:03.236 "traddr": "10.0.0.2", 00:22:03.236 "adrfam": "ipv4", 00:22:03.236 "trsvcid": "4420", 00:22:03.236 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:03.236 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:03.236 "hdgst": false, 00:22:03.236 "ddgst": false 00:22:03.236 }, 00:22:03.236 "method": "bdev_nvme_attach_controller" 00:22:03.236 },{ 00:22:03.236 "params": { 00:22:03.236 "name": "Nvme10", 00:22:03.236 "trtype": "tcp", 00:22:03.236 "traddr": "10.0.0.2", 00:22:03.236 "adrfam": "ipv4", 00:22:03.236 "trsvcid": "4420", 00:22:03.236 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:03.236 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:03.236 "hdgst": false, 00:22:03.236 "ddgst": false 00:22:03.236 }, 00:22:03.236 "method": "bdev_nvme_attach_controller" 00:22:03.236 }' 00:22:03.236 [2024-12-06 15:39:09.022339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.236 [2024-12-06 15:39:09.063816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.144 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.144 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:05.144 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:05.144 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.144 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.144 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.144 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3066634 00:22:05.144 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:05.144 15:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:06.101 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3066634 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:06.101 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3066237 00:22:06.101 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:06.102 { 00:22:06.102 "params": { 00:22:06.102 "name": "Nvme$subsystem", 00:22:06.102 "trtype": "$TEST_TRANSPORT", 00:22:06.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.102 "adrfam": "ipv4", 00:22:06.102 "trsvcid": "$NVMF_PORT", 00:22:06.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.102 "hdgst": ${hdgst:-false}, 00:22:06.102 "ddgst": ${ddgst:-false} 00:22:06.102 }, 00:22:06.102 "method": "bdev_nvme_attach_controller" 00:22:06.102 } 00:22:06.102 EOF 00:22:06.102 )") 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:06.102 15:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:06.102 { 00:22:06.102 "params": { 00:22:06.102 "name": "Nvme$subsystem", 00:22:06.102 "trtype": "$TEST_TRANSPORT", 00:22:06.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.102 "adrfam": "ipv4", 00:22:06.102 "trsvcid": "$NVMF_PORT", 00:22:06.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.102 "hdgst": ${hdgst:-false}, 00:22:06.102 "ddgst": ${ddgst:-false} 00:22:06.102 }, 00:22:06.102 "method": "bdev_nvme_attach_controller" 00:22:06.102 } 00:22:06.102 EOF 00:22:06.102 )") 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:06.102 { 00:22:06.102 "params": { 00:22:06.102 "name": "Nvme$subsystem", 00:22:06.102 "trtype": "$TEST_TRANSPORT", 00:22:06.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.102 "adrfam": "ipv4", 00:22:06.102 "trsvcid": "$NVMF_PORT", 00:22:06.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.102 "hdgst": ${hdgst:-false}, 00:22:06.102 "ddgst": ${ddgst:-false} 00:22:06.102 }, 00:22:06.102 "method": "bdev_nvme_attach_controller" 00:22:06.102 } 00:22:06.102 EOF 00:22:06.102 )") 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:06.102 
15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:06.102 { 00:22:06.102 "params": { 00:22:06.102 "name": "Nvme$subsystem", 00:22:06.102 "trtype": "$TEST_TRANSPORT", 00:22:06.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.102 "adrfam": "ipv4", 00:22:06.102 "trsvcid": "$NVMF_PORT", 00:22:06.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.102 "hdgst": ${hdgst:-false}, 00:22:06.102 "ddgst": ${ddgst:-false} 00:22:06.102 }, 00:22:06.102 "method": "bdev_nvme_attach_controller" 00:22:06.102 } 00:22:06.102 EOF 00:22:06.102 )") 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:06.102 { 00:22:06.102 "params": { 00:22:06.102 "name": "Nvme$subsystem", 00:22:06.102 "trtype": "$TEST_TRANSPORT", 00:22:06.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.102 "adrfam": "ipv4", 00:22:06.102 "trsvcid": "$NVMF_PORT", 00:22:06.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.102 "hdgst": ${hdgst:-false}, 00:22:06.102 "ddgst": ${ddgst:-false} 00:22:06.102 }, 00:22:06.102 "method": "bdev_nvme_attach_controller" 00:22:06.102 } 00:22:06.102 EOF 00:22:06.102 )") 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:22:06.102 { 00:22:06.102 "params": { 00:22:06.102 "name": "Nvme$subsystem", 00:22:06.102 "trtype": "$TEST_TRANSPORT", 00:22:06.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.102 "adrfam": "ipv4", 00:22:06.102 "trsvcid": "$NVMF_PORT", 00:22:06.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.102 "hdgst": ${hdgst:-false}, 00:22:06.102 "ddgst": ${ddgst:-false} 00:22:06.102 }, 00:22:06.102 "method": "bdev_nvme_attach_controller" 00:22:06.102 } 00:22:06.102 EOF 00:22:06.102 )") 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:06.102 { 00:22:06.102 "params": { 00:22:06.102 "name": "Nvme$subsystem", 00:22:06.102 "trtype": "$TEST_TRANSPORT", 00:22:06.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.102 "adrfam": "ipv4", 00:22:06.102 "trsvcid": "$NVMF_PORT", 00:22:06.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.102 "hdgst": ${hdgst:-false}, 00:22:06.102 "ddgst": ${ddgst:-false} 00:22:06.102 }, 00:22:06.102 "method": "bdev_nvme_attach_controller" 00:22:06.102 } 00:22:06.102 EOF 00:22:06.102 )") 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:06.102 [2024-12-06 15:39:11.881043] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:22:06.102 [2024-12-06 15:39:11.881095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067314 ] 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:06.102 { 00:22:06.102 "params": { 00:22:06.102 "name": "Nvme$subsystem", 00:22:06.102 "trtype": "$TEST_TRANSPORT", 00:22:06.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.102 "adrfam": "ipv4", 00:22:06.102 "trsvcid": "$NVMF_PORT", 00:22:06.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.102 "hdgst": ${hdgst:-false}, 00:22:06.102 "ddgst": ${ddgst:-false} 00:22:06.102 }, 00:22:06.102 "method": "bdev_nvme_attach_controller" 00:22:06.102 } 00:22:06.102 EOF 00:22:06.102 )") 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:06.102 { 00:22:06.102 "params": { 00:22:06.102 "name": "Nvme$subsystem", 00:22:06.102 "trtype": "$TEST_TRANSPORT", 00:22:06.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.102 "adrfam": "ipv4", 00:22:06.102 "trsvcid": "$NVMF_PORT", 00:22:06.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.102 "hdgst": ${hdgst:-false}, 00:22:06.102 "ddgst": ${ddgst:-false} 00:22:06.102 }, 00:22:06.102 "method": 
"bdev_nvme_attach_controller" 00:22:06.102 } 00:22:06.102 EOF 00:22:06.102 )") 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:06.102 { 00:22:06.102 "params": { 00:22:06.102 "name": "Nvme$subsystem", 00:22:06.102 "trtype": "$TEST_TRANSPORT", 00:22:06.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:06.102 "adrfam": "ipv4", 00:22:06.102 "trsvcid": "$NVMF_PORT", 00:22:06.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:06.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:06.102 "hdgst": ${hdgst:-false}, 00:22:06.102 "ddgst": ${ddgst:-false} 00:22:06.102 }, 00:22:06.102 "method": "bdev_nvme_attach_controller" 00:22:06.102 } 00:22:06.102 EOF 00:22:06.102 )") 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:06.102 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:06.103 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:06.103 15:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:06.103 "params": { 00:22:06.103 "name": "Nvme1", 00:22:06.103 "trtype": "tcp", 00:22:06.103 "traddr": "10.0.0.2", 00:22:06.103 "adrfam": "ipv4", 00:22:06.103 "trsvcid": "4420", 00:22:06.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:06.103 "hdgst": false, 00:22:06.103 "ddgst": false 00:22:06.103 }, 00:22:06.103 "method": "bdev_nvme_attach_controller" 00:22:06.103 },{ 00:22:06.103 "params": { 00:22:06.103 "name": "Nvme2", 00:22:06.103 "trtype": "tcp", 00:22:06.103 "traddr": "10.0.0.2", 00:22:06.103 "adrfam": "ipv4", 00:22:06.103 "trsvcid": "4420", 00:22:06.103 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:06.103 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:06.103 "hdgst": false, 00:22:06.103 "ddgst": false 00:22:06.103 }, 00:22:06.103 "method": "bdev_nvme_attach_controller" 00:22:06.103 },{ 00:22:06.103 "params": { 00:22:06.103 "name": "Nvme3", 00:22:06.103 "trtype": "tcp", 00:22:06.103 "traddr": "10.0.0.2", 00:22:06.103 "adrfam": "ipv4", 00:22:06.103 "trsvcid": "4420", 00:22:06.103 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:06.103 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:06.103 "hdgst": false, 00:22:06.103 "ddgst": false 00:22:06.103 }, 00:22:06.103 "method": "bdev_nvme_attach_controller" 00:22:06.103 },{ 00:22:06.103 "params": { 00:22:06.103 "name": "Nvme4", 00:22:06.103 "trtype": "tcp", 00:22:06.103 "traddr": "10.0.0.2", 00:22:06.103 "adrfam": "ipv4", 00:22:06.103 "trsvcid": "4420", 00:22:06.103 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:06.103 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:06.103 "hdgst": false, 00:22:06.103 "ddgst": false 00:22:06.103 }, 00:22:06.103 "method": "bdev_nvme_attach_controller" 00:22:06.103 },{ 00:22:06.103 "params": { 
00:22:06.103 "name": "Nvme5", 00:22:06.103 "trtype": "tcp", 00:22:06.103 "traddr": "10.0.0.2", 00:22:06.103 "adrfam": "ipv4", 00:22:06.103 "trsvcid": "4420", 00:22:06.103 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:06.103 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:06.103 "hdgst": false, 00:22:06.103 "ddgst": false 00:22:06.103 }, 00:22:06.103 "method": "bdev_nvme_attach_controller" 00:22:06.103 },{ 00:22:06.103 "params": { 00:22:06.103 "name": "Nvme6", 00:22:06.103 "trtype": "tcp", 00:22:06.103 "traddr": "10.0.0.2", 00:22:06.103 "adrfam": "ipv4", 00:22:06.103 "trsvcid": "4420", 00:22:06.103 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:06.103 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:06.103 "hdgst": false, 00:22:06.103 "ddgst": false 00:22:06.103 }, 00:22:06.103 "method": "bdev_nvme_attach_controller" 00:22:06.103 },{ 00:22:06.103 "params": { 00:22:06.103 "name": "Nvme7", 00:22:06.103 "trtype": "tcp", 00:22:06.103 "traddr": "10.0.0.2", 00:22:06.103 "adrfam": "ipv4", 00:22:06.103 "trsvcid": "4420", 00:22:06.103 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:06.103 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:06.103 "hdgst": false, 00:22:06.103 "ddgst": false 00:22:06.103 }, 00:22:06.103 "method": "bdev_nvme_attach_controller" 00:22:06.103 },{ 00:22:06.103 "params": { 00:22:06.103 "name": "Nvme8", 00:22:06.103 "trtype": "tcp", 00:22:06.103 "traddr": "10.0.0.2", 00:22:06.103 "adrfam": "ipv4", 00:22:06.103 "trsvcid": "4420", 00:22:06.103 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:06.103 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:06.103 "hdgst": false, 00:22:06.103 "ddgst": false 00:22:06.103 }, 00:22:06.103 "method": "bdev_nvme_attach_controller" 00:22:06.103 },{ 00:22:06.103 "params": { 00:22:06.103 "name": "Nvme9", 00:22:06.103 "trtype": "tcp", 00:22:06.103 "traddr": "10.0.0.2", 00:22:06.103 "adrfam": "ipv4", 00:22:06.103 "trsvcid": "4420", 00:22:06.103 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:06.103 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:06.103 "hdgst": false, 00:22:06.103 "ddgst": false 00:22:06.103 }, 00:22:06.103 "method": "bdev_nvme_attach_controller" 00:22:06.103 },{ 00:22:06.103 "params": { 00:22:06.103 "name": "Nvme10", 00:22:06.103 "trtype": "tcp", 00:22:06.103 "traddr": "10.0.0.2", 00:22:06.103 "adrfam": "ipv4", 00:22:06.103 "trsvcid": "4420", 00:22:06.103 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:06.103 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:06.103 "hdgst": false, 00:22:06.103 "ddgst": false 00:22:06.103 }, 00:22:06.103 "method": "bdev_nvme_attach_controller" 00:22:06.103 }' 00:22:06.103 [2024-12-06 15:39:11.956092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.103 [2024-12-06 15:39:11.996914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.483 Running I/O for 1 seconds... 00:22:08.863 2257.00 IOPS, 141.06 MiB/s 00:22:08.863 Latency(us) 00:22:08.863 [2024-12-06T14:39:14.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.863 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.863 Verification LBA range: start 0x0 length 0x400 00:22:08.863 Nvme1n1 : 1.06 241.66 15.10 0.00 0.00 262254.93 16477.62 228689.43 00:22:08.863 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.863 Verification LBA range: start 0x0 length 0x400 00:22:08.863 Nvme2n1 : 1.14 279.76 17.49 0.00 0.00 221419.62 16602.45 212711.13 00:22:08.863 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.863 Verification LBA range: start 0x0 length 0x400 00:22:08.863 Nvme3n1 : 1.12 290.01 18.13 0.00 0.00 205657.63 24217.11 204721.98 00:22:08.863 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.863 Verification LBA range: start 0x0 length 0x400 00:22:08.863 Nvme4n1 : 1.13 293.33 18.33 0.00 0.00 204111.41 7770.70 211712.49 00:22:08.863 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:08.863 Verification LBA range: start 0x0 length 0x400 00:22:08.863 Nvme5n1 : 1.15 283.07 17.69 0.00 0.00 211350.70 2246.95 217704.35 00:22:08.863 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.863 Verification LBA range: start 0x0 length 0x400 00:22:08.863 Nvme6n1 : 1.15 277.48 17.34 0.00 0.00 212965.47 16227.96 230686.72 00:22:08.863 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.863 Verification LBA range: start 0x0 length 0x400 00:22:08.863 Nvme7n1 : 1.14 281.33 17.58 0.00 0.00 206684.16 16477.62 211712.49 00:22:08.863 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.863 Verification LBA range: start 0x0 length 0x400 00:22:08.863 Nvme8n1 : 1.16 276.88 17.30 0.00 0.00 207261.11 13044.78 223696.21 00:22:08.863 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.863 Verification LBA range: start 0x0 length 0x400 00:22:08.863 Nvme9n1 : 1.16 275.94 17.25 0.00 0.00 205089.06 17975.59 237677.23 00:22:08.863 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.864 Verification LBA range: start 0x0 length 0x400 00:22:08.864 Nvme10n1 : 1.16 275.49 17.22 0.00 0.00 202307.68 16602.45 222697.57 00:22:08.864 [2024-12-06T14:39:14.862Z] =================================================================================================================== 00:22:08.864 [2024-12-06T14:39:14.862Z] Total : 2774.94 173.43 0.00 0.00 212876.07 2246.95 237677.23 00:22:08.864 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:08.864 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:08.864 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:22:08.864 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:08.864 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:08.864 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:08.864 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:08.864 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:08.864 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:08.864 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:08.864 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:08.864 rmmod nvme_tcp 00:22:08.864 rmmod nvme_fabrics 00:22:09.123 rmmod nvme_keyring 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3066237 ']' 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3066237 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3066237 ']' 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 3066237 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3066237 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3066237' 00:22:09.123 killing process with pid 3066237 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3066237 00:22:09.123 15:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3066237 00:22:09.381 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:09.381 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:09.381 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:09.381 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:09.381 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:09.381 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:09.381 15:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:09.381 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:09.381 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:09.381 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.381 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.381 15:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.918 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:11.918 00:22:11.918 real 0m15.376s 00:22:11.918 user 0m34.250s 00:22:11.918 sys 0m5.899s 00:22:11.918 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.918 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:11.918 ************************************ 00:22:11.918 END TEST nvmf_shutdown_tc1 00:22:11.918 ************************************ 00:22:11.918 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:11.918 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:11.918 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.918 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:11.918 ************************************ 00:22:11.918 
START TEST nvmf_shutdown_tc2 00:22:11.918 ************************************ 00:22:11.918 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:11.918 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:11.918 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:11.918 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:11.918 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.918 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:11.918 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:11.919 15:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:11.919 15:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:11.919 15:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:11.919 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:11.919 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:11.919 15:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.919 15:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:11.919 Found net devices under 0000:86:00.0: cvl_0_0 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:11.919 Found net devices under 0000:86:00.1: cvl_0_1 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:11.919 15:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.919 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:11.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:11.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:22:11.920 00:22:11.920 --- 10.0.0.2 ping statistics --- 00:22:11.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.920 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:22:11.920 00:22:11.920 --- 10.0.0.1 ping statistics --- 00:22:11.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.920 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:11.920 15:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3068357 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3068357 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3068357 ']' 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.920 15:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:11.920 [2024-12-06 15:39:17.838974] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:22:11.920 [2024-12-06 15:39:17.839027] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.180 [2024-12-06 15:39:17.921712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:12.180 [2024-12-06 15:39:17.963840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.180 [2024-12-06 15:39:17.963878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.180 [2024-12-06 15:39:17.963885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.180 [2024-12-06 15:39:17.963892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.180 [2024-12-06 15:39:17.963897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:12.180 [2024-12-06 15:39:17.965489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.180 [2024-12-06 15:39:17.965597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.180 [2024-12-06 15:39:17.965619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:12.180 [2024-12-06 15:39:17.965620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.180 [2024-12-06 15:39:18.104027] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.180 15:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.180 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.439 Malloc1 00:22:12.439 [2024-12-06 15:39:18.222658] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.439 Malloc2 00:22:12.439 Malloc3 00:22:12.439 Malloc4 00:22:12.439 Malloc5 00:22:12.439 Malloc6 00:22:12.698 Malloc7 00:22:12.698 Malloc8 00:22:12.698 Malloc9 
00:22:12.698 Malloc10 00:22:12.698 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.698 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:12.698 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.698 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.698 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3068632 00:22:12.698 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3068632 /var/tmp/bdevperf.sock 00:22:12.698 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3068632 ']' 00:22:12.698 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.698 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:12.698 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.698 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:12.698 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:12.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.698 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:12.698 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.699 { 00:22:12.699 "params": { 00:22:12.699 "name": "Nvme$subsystem", 00:22:12.699 "trtype": "$TEST_TRANSPORT", 00:22:12.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.699 "adrfam": "ipv4", 00:22:12.699 "trsvcid": "$NVMF_PORT", 00:22:12.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.699 "hdgst": ${hdgst:-false}, 00:22:12.699 "ddgst": ${ddgst:-false} 00:22:12.699 }, 00:22:12.699 "method": "bdev_nvme_attach_controller" 00:22:12.699 } 00:22:12.699 EOF 00:22:12.699 )") 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.699 { 00:22:12.699 "params": { 00:22:12.699 "name": "Nvme$subsystem", 00:22:12.699 "trtype": "$TEST_TRANSPORT", 00:22:12.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.699 
"adrfam": "ipv4", 00:22:12.699 "trsvcid": "$NVMF_PORT", 00:22:12.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.699 "hdgst": ${hdgst:-false}, 00:22:12.699 "ddgst": ${ddgst:-false} 00:22:12.699 }, 00:22:12.699 "method": "bdev_nvme_attach_controller" 00:22:12.699 } 00:22:12.699 EOF 00:22:12.699 )") 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.699 { 00:22:12.699 "params": { 00:22:12.699 "name": "Nvme$subsystem", 00:22:12.699 "trtype": "$TEST_TRANSPORT", 00:22:12.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.699 "adrfam": "ipv4", 00:22:12.699 "trsvcid": "$NVMF_PORT", 00:22:12.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.699 "hdgst": ${hdgst:-false}, 00:22:12.699 "ddgst": ${ddgst:-false} 00:22:12.699 }, 00:22:12.699 "method": "bdev_nvme_attach_controller" 00:22:12.699 } 00:22:12.699 EOF 00:22:12.699 )") 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.699 { 00:22:12.699 "params": { 00:22:12.699 "name": "Nvme$subsystem", 00:22:12.699 "trtype": "$TEST_TRANSPORT", 00:22:12.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.699 "adrfam": "ipv4", 00:22:12.699 "trsvcid": "$NVMF_PORT", 00:22:12.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:12.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.699 "hdgst": ${hdgst:-false}, 00:22:12.699 "ddgst": ${ddgst:-false} 00:22:12.699 }, 00:22:12.699 "method": "bdev_nvme_attach_controller" 00:22:12.699 } 00:22:12.699 EOF 00:22:12.699 )") 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.699 { 00:22:12.699 "params": { 00:22:12.699 "name": "Nvme$subsystem", 00:22:12.699 "trtype": "$TEST_TRANSPORT", 00:22:12.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.699 "adrfam": "ipv4", 00:22:12.699 "trsvcid": "$NVMF_PORT", 00:22:12.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.699 "hdgst": ${hdgst:-false}, 00:22:12.699 "ddgst": ${ddgst:-false} 00:22:12.699 }, 00:22:12.699 "method": "bdev_nvme_attach_controller" 00:22:12.699 } 00:22:12.699 EOF 00:22:12.699 )") 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.699 { 00:22:12.699 "params": { 00:22:12.699 "name": "Nvme$subsystem", 00:22:12.699 "trtype": "$TEST_TRANSPORT", 00:22:12.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.699 "adrfam": "ipv4", 00:22:12.699 "trsvcid": "$NVMF_PORT", 00:22:12.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.699 "hdgst": ${hdgst:-false}, 00:22:12.699 "ddgst": 
${ddgst:-false} 00:22:12.699 }, 00:22:12.699 "method": "bdev_nvme_attach_controller" 00:22:12.699 } 00:22:12.699 EOF 00:22:12.699 )") 00:22:12.699 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.959 { 00:22:12.959 "params": { 00:22:12.959 "name": "Nvme$subsystem", 00:22:12.959 "trtype": "$TEST_TRANSPORT", 00:22:12.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.959 "adrfam": "ipv4", 00:22:12.959 "trsvcid": "$NVMF_PORT", 00:22:12.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.959 "hdgst": ${hdgst:-false}, 00:22:12.959 "ddgst": ${ddgst:-false} 00:22:12.959 }, 00:22:12.959 "method": "bdev_nvme_attach_controller" 00:22:12.959 } 00:22:12.959 EOF 00:22:12.959 )") 00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.959 [2024-12-06 15:39:18.700418] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:22:12.959 [2024-12-06 15:39:18.700468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068632 ] 00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.959 { 00:22:12.959 "params": { 00:22:12.959 "name": "Nvme$subsystem", 00:22:12.959 "trtype": "$TEST_TRANSPORT", 00:22:12.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.959 "adrfam": "ipv4", 00:22:12.959 "trsvcid": "$NVMF_PORT", 00:22:12.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.959 "hdgst": ${hdgst:-false}, 00:22:12.959 "ddgst": ${ddgst:-false} 00:22:12.959 }, 00:22:12.959 "method": "bdev_nvme_attach_controller" 00:22:12.959 } 00:22:12.959 EOF 00:22:12.959 )") 00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.959 { 00:22:12.959 "params": { 00:22:12.959 "name": "Nvme$subsystem", 00:22:12.959 "trtype": "$TEST_TRANSPORT", 00:22:12.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.959 "adrfam": "ipv4", 00:22:12.959 "trsvcid": "$NVMF_PORT", 00:22:12.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.959 "hdgst": ${hdgst:-false}, 00:22:12.959 "ddgst": ${ddgst:-false} 00:22:12.959 }, 00:22:12.959 "method": 
"bdev_nvme_attach_controller" 00:22:12.959 } 00:22:12.959 EOF 00:22:12.959 )") 00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.959 { 00:22:12.959 "params": { 00:22:12.959 "name": "Nvme$subsystem", 00:22:12.959 "trtype": "$TEST_TRANSPORT", 00:22:12.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.959 "adrfam": "ipv4", 00:22:12.959 "trsvcid": "$NVMF_PORT", 00:22:12.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.959 "hdgst": ${hdgst:-false}, 00:22:12.959 "ddgst": ${ddgst:-false} 00:22:12.959 }, 00:22:12.959 "method": "bdev_nvme_attach_controller" 00:22:12.959 } 00:22:12.959 EOF 00:22:12.959 )") 00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:12.959 15:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:12.959 "params": { 00:22:12.959 "name": "Nvme1", 00:22:12.959 "trtype": "tcp", 00:22:12.959 "traddr": "10.0.0.2", 00:22:12.959 "adrfam": "ipv4", 00:22:12.959 "trsvcid": "4420", 00:22:12.959 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.959 "hdgst": false, 00:22:12.960 "ddgst": false 00:22:12.960 }, 00:22:12.960 "method": "bdev_nvme_attach_controller" 00:22:12.960 },{ 00:22:12.960 "params": { 00:22:12.960 "name": "Nvme2", 00:22:12.960 "trtype": "tcp", 00:22:12.960 "traddr": "10.0.0.2", 00:22:12.960 "adrfam": "ipv4", 00:22:12.960 "trsvcid": "4420", 00:22:12.960 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:12.960 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:12.960 "hdgst": false, 00:22:12.960 "ddgst": false 00:22:12.960 }, 00:22:12.960 "method": "bdev_nvme_attach_controller" 00:22:12.960 },{ 00:22:12.960 "params": { 00:22:12.960 "name": "Nvme3", 00:22:12.960 "trtype": "tcp", 00:22:12.960 "traddr": "10.0.0.2", 00:22:12.960 "adrfam": "ipv4", 00:22:12.960 "trsvcid": "4420", 00:22:12.960 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:12.960 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:12.960 "hdgst": false, 00:22:12.960 "ddgst": false 00:22:12.960 }, 00:22:12.960 "method": "bdev_nvme_attach_controller" 00:22:12.960 },{ 00:22:12.960 "params": { 00:22:12.960 "name": "Nvme4", 00:22:12.960 "trtype": "tcp", 00:22:12.960 "traddr": "10.0.0.2", 00:22:12.960 "adrfam": "ipv4", 00:22:12.960 "trsvcid": "4420", 00:22:12.960 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:12.960 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:12.960 "hdgst": false, 00:22:12.960 "ddgst": false 00:22:12.960 }, 00:22:12.960 "method": "bdev_nvme_attach_controller" 00:22:12.960 },{ 00:22:12.960 "params": { 
00:22:12.960 "name": "Nvme5", 00:22:12.960 "trtype": "tcp", 00:22:12.960 "traddr": "10.0.0.2", 00:22:12.960 "adrfam": "ipv4", 00:22:12.960 "trsvcid": "4420", 00:22:12.960 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:12.960 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:12.960 "hdgst": false, 00:22:12.960 "ddgst": false 00:22:12.960 }, 00:22:12.960 "method": "bdev_nvme_attach_controller" 00:22:12.960 },{ 00:22:12.960 "params": { 00:22:12.960 "name": "Nvme6", 00:22:12.960 "trtype": "tcp", 00:22:12.960 "traddr": "10.0.0.2", 00:22:12.960 "adrfam": "ipv4", 00:22:12.960 "trsvcid": "4420", 00:22:12.960 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:12.960 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:12.960 "hdgst": false, 00:22:12.960 "ddgst": false 00:22:12.960 }, 00:22:12.960 "method": "bdev_nvme_attach_controller" 00:22:12.960 },{ 00:22:12.960 "params": { 00:22:12.960 "name": "Nvme7", 00:22:12.960 "trtype": "tcp", 00:22:12.960 "traddr": "10.0.0.2", 00:22:12.960 "adrfam": "ipv4", 00:22:12.960 "trsvcid": "4420", 00:22:12.960 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:12.960 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:12.960 "hdgst": false, 00:22:12.960 "ddgst": false 00:22:12.960 }, 00:22:12.960 "method": "bdev_nvme_attach_controller" 00:22:12.960 },{ 00:22:12.960 "params": { 00:22:12.960 "name": "Nvme8", 00:22:12.960 "trtype": "tcp", 00:22:12.960 "traddr": "10.0.0.2", 00:22:12.960 "adrfam": "ipv4", 00:22:12.960 "trsvcid": "4420", 00:22:12.960 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:12.960 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:12.960 "hdgst": false, 00:22:12.960 "ddgst": false 00:22:12.960 }, 00:22:12.960 "method": "bdev_nvme_attach_controller" 00:22:12.960 },{ 00:22:12.960 "params": { 00:22:12.960 "name": "Nvme9", 00:22:12.960 "trtype": "tcp", 00:22:12.960 "traddr": "10.0.0.2", 00:22:12.960 "adrfam": "ipv4", 00:22:12.960 "trsvcid": "4420", 00:22:12.960 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:12.960 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:12.960 "hdgst": false, 00:22:12.960 "ddgst": false 00:22:12.960 }, 00:22:12.960 "method": "bdev_nvme_attach_controller" 00:22:12.960 },{ 00:22:12.960 "params": { 00:22:12.960 "name": "Nvme10", 00:22:12.960 "trtype": "tcp", 00:22:12.960 "traddr": "10.0.0.2", 00:22:12.960 "adrfam": "ipv4", 00:22:12.960 "trsvcid": "4420", 00:22:12.960 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:12.960 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:12.960 "hdgst": false, 00:22:12.960 "ddgst": false 00:22:12.960 }, 00:22:12.960 "method": "bdev_nvme_attach_controller" 00:22:12.960 }' 00:22:12.960 [2024-12-06 15:39:18.779169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.960 [2024-12-06 15:39:18.819753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.867 Running I/O for 10 seconds... 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:14.867 15:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:14.867 15:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:14.867 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.126 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:15.126 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.126 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:15.126 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:15.126 15:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=147 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 147 -ge 100 ']' 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3068632 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3068632 ']' 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3068632 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3068632 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3068632' 00:22:15.386 killing process with pid 3068632 00:22:15.386 15:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3068632 00:22:15.386 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3068632 00:22:15.386 Received shutdown signal, test time was about 0.938243 seconds 00:22:15.386 00:22:15.386 Latency(us) 00:22:15.386 [2024-12-06T14:39:21.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.386 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.386 Verification LBA range: start 0x0 length 0x400 00:22:15.386 Nvme1n1 : 0.92 278.49 17.41 0.00 0.00 227189.52 15978.30 196732.83 00:22:15.386 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.386 Verification LBA range: start 0x0 length 0x400 00:22:15.386 Nvme2n1 : 0.92 277.89 17.37 0.00 0.00 224092.04 14854.83 233682.65 00:22:15.386 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.386 Verification LBA range: start 0x0 length 0x400 00:22:15.386 Nvme3n1 : 0.93 274.80 17.18 0.00 0.00 222807.04 12919.95 219701.64 00:22:15.386 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.386 Verification LBA range: start 0x0 length 0x400 00:22:15.386 Nvme4n1 : 0.92 302.78 18.92 0.00 0.00 195125.75 14105.84 223696.21 00:22:15.386 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.386 Verification LBA range: start 0x0 length 0x400 00:22:15.386 Nvme5n1 : 0.92 279.65 17.48 0.00 0.00 210497.95 15666.22 244667.73 00:22:15.386 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.386 Verification LBA range: start 0x0 length 0x400 00:22:15.386 Nvme6n1 : 0.91 282.36 17.65 0.00 0.00 204685.90 18225.25 191739.61 00:22:15.386 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.387 Verification LBA range: start 0x0 length 0x400 00:22:15.387 Nvme7n1 : 
0.91 281.88 17.62 0.00 0.00 200580.88 20846.69 211712.49 00:22:15.387 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.387 Verification LBA range: start 0x0 length 0x400 00:22:15.387 Nvme8n1 : 0.93 276.30 17.27 0.00 0.00 202123.70 14230.67 204721.98 00:22:15.387 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.387 Verification LBA range: start 0x0 length 0x400 00:22:15.387 Nvme9n1 : 0.94 273.24 17.08 0.00 0.00 201038.99 18474.91 233682.65 00:22:15.387 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:15.387 Verification LBA range: start 0x0 length 0x400 00:22:15.387 Nvme10n1 : 0.94 273.04 17.06 0.00 0.00 197043.69 14979.66 215707.06 00:22:15.387 [2024-12-06T14:39:21.385Z] =================================================================================================================== 00:22:15.387 [2024-12-06T14:39:21.385Z] Total : 2800.42 175.03 0.00 0.00 208394.15 12919.95 244667.73 00:22:15.646 15:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:16.581 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3068357 00:22:16.581 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:16.581 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:16.581 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:16.581 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:16.581 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:22:16.581 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:16.581 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:16.581 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:16.581 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:16.581 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:16.581 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:16.581 rmmod nvme_tcp 00:22:16.581 rmmod nvme_fabrics 00:22:16.840 rmmod nvme_keyring 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3068357 ']' 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3068357 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3068357 ']' 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3068357 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3068357 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3068357' 00:22:16.840 killing process with pid 3068357 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3068357 00:22:16.840 15:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3068357 00:22:17.099 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:17.099 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:17.099 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:17.099 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:17.099 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:17.099 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:17.099 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:17.099 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:17.099 15:39:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:17.099 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.099 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.099 15:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:19.683 00:22:19.683 real 0m7.628s 00:22:19.683 user 0m22.798s 00:22:19.683 sys 0m1.448s 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:19.683 ************************************ 00:22:19.683 END TEST nvmf_shutdown_tc2 00:22:19.683 ************************************ 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:19.683 ************************************ 00:22:19.683 START TEST nvmf_shutdown_tc3 00:22:19.683 ************************************ 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:19.683 15:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.683 15:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:19.683 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:19.683 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:19.683 Found net devices under 0000:86:00.0: cvl_0_0 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.683 15:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:19.683 Found net devices under 0000:86:00.1: cvl_0_1 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:19.683 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:19.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:22:19.684 00:22:19.684 --- 10.0.0.2 ping statistics --- 00:22:19.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.684 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:19.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:22:19.684 00:22:19.684 --- 10.0.0.1 ping statistics --- 00:22:19.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.684 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:19.684 
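The `nvmf_tcp_init` steps above build a two-namespace topology: the target-side NIC is moved into a private netns, the initiator NIC stays in the root namespace, each side gets one address on 10.0.0.0/24, port 4420 is opened, and connectivity is verified with pings in both directions. A dry-run sketch of that sequence, with names taken from the log; `run` only prints here, since the real commands need root and the real `cvl_0_*` interfaces:

```shell
# Dry-run of the namespace setup performed by nvmf/common.sh in the trace.
# Swap run() for: run() { sudo "$@"; } to execute on a real test host.
NS=cvl_0_0_ns_spdk
plan=""
run() { plan="$plan$*"$'\n'; echo "+ $*"; }
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                      # target NIC into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                   # root ns -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1               # target ns -> initiator
```

Putting only the target interface in a namespace lets one machine exercise a real TCP path (the pings in the log show sub-millisecond RTTs) without a second host.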
15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3069900 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3069900 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3069900 ']' 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.684 15:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:19.684 [2024-12-06 15:39:25.570042] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:22:19.684 [2024-12-06 15:39:25.570089] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.684 [2024-12-06 15:39:25.647000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.942 [2024-12-06 15:39:25.689892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.942 [2024-12-06 15:39:25.689929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.942 [2024-12-06 15:39:25.689936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.942 [2024-12-06 15:39:25.689941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.942 [2024-12-06 15:39:25.689947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:19.942 [2024-12-06 15:39:25.691597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.942 [2024-12-06 15:39:25.691685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.942 [2024-12-06 15:39:25.691767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.942 [2024-12-06 15:39:25.691767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.520 [2024-12-06 15:39:26.431640] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.520 15:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.520 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.520 Malloc1 00:22:20.777 [2024-12-06 15:39:26.534957] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.777 Malloc2 00:22:20.777 Malloc3 00:22:20.777 Malloc4 00:22:20.777 Malloc5 00:22:20.777 Malloc6 00:22:20.777 Malloc7 00:22:21.035 Malloc8 00:22:21.035 Malloc9 
00:22:21.035 Malloc10 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3070182 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3070182 /var/tmp/bdevperf.sock 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3070182 ']' 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:21.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:21.035 { 00:22:21.035 "params": { 00:22:21.035 "name": "Nvme$subsystem", 00:22:21.035 "trtype": "$TEST_TRANSPORT", 00:22:21.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.035 "adrfam": "ipv4", 00:22:21.035 "trsvcid": "$NVMF_PORT", 00:22:21.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.035 "hdgst": ${hdgst:-false}, 00:22:21.035 "ddgst": ${ddgst:-false} 00:22:21.035 }, 00:22:21.035 "method": "bdev_nvme_attach_controller" 00:22:21.035 } 00:22:21.035 EOF 00:22:21.035 )") 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:21.035 { 00:22:21.035 "params": { 00:22:21.035 "name": "Nvme$subsystem", 00:22:21.035 "trtype": "$TEST_TRANSPORT", 00:22:21.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.035 
"adrfam": "ipv4", 00:22:21.035 "trsvcid": "$NVMF_PORT", 00:22:21.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.035 "hdgst": ${hdgst:-false}, 00:22:21.035 "ddgst": ${ddgst:-false} 00:22:21.035 }, 00:22:21.035 "method": "bdev_nvme_attach_controller" 00:22:21.035 } 00:22:21.035 EOF 00:22:21.035 )") 00:22:21.035 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:21.036 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:21.036 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:21.036 { 00:22:21.036 "params": { 00:22:21.036 "name": "Nvme$subsystem", 00:22:21.036 "trtype": "$TEST_TRANSPORT", 00:22:21.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.036 "adrfam": "ipv4", 00:22:21.036 "trsvcid": "$NVMF_PORT", 00:22:21.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.036 "hdgst": ${hdgst:-false}, 00:22:21.036 "ddgst": ${ddgst:-false} 00:22:21.036 }, 00:22:21.036 "method": "bdev_nvme_attach_controller" 00:22:21.036 } 00:22:21.036 EOF 00:22:21.036 )") 00:22:21.036 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:21.036 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:21.036 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:21.036 { 00:22:21.036 "params": { 00:22:21.036 "name": "Nvme$subsystem", 00:22:21.036 "trtype": "$TEST_TRANSPORT", 00:22:21.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.036 "adrfam": "ipv4", 00:22:21.036 "trsvcid": "$NVMF_PORT", 00:22:21.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:21.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.036 "hdgst": ${hdgst:-false}, 00:22:21.036 "ddgst": ${ddgst:-false} 00:22:21.036 }, 00:22:21.036 "method": "bdev_nvme_attach_controller" 00:22:21.036 } 00:22:21.036 EOF 00:22:21.036 )") 00:22:21.036 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:21.036 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:21.036 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:21.036 { 00:22:21.036 "params": { 00:22:21.036 "name": "Nvme$subsystem", 00:22:21.036 "trtype": "$TEST_TRANSPORT", 00:22:21.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.036 "adrfam": "ipv4", 00:22:21.036 "trsvcid": "$NVMF_PORT", 00:22:21.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.036 "hdgst": ${hdgst:-false}, 00:22:21.036 "ddgst": ${ddgst:-false} 00:22:21.036 }, 00:22:21.036 "method": "bdev_nvme_attach_controller" 00:22:21.036 } 00:22:21.036 EOF 00:22:21.036 )") 00:22:21.036 15:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:21.036 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:21.036 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:21.036 { 00:22:21.036 "params": { 00:22:21.036 "name": "Nvme$subsystem", 00:22:21.036 "trtype": "$TEST_TRANSPORT", 00:22:21.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.036 "adrfam": "ipv4", 00:22:21.036 "trsvcid": "$NVMF_PORT", 00:22:21.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.036 "hdgst": ${hdgst:-false}, 00:22:21.036 "ddgst": 
${ddgst:-false} 00:22:21.036 }, 00:22:21.036 "method": "bdev_nvme_attach_controller" 00:22:21.036 } 00:22:21.036 EOF 00:22:21.036 )") 00:22:21.036 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:21.036 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:21.036 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:21.036 { 00:22:21.036 "params": { 00:22:21.036 "name": "Nvme$subsystem", 00:22:21.036 "trtype": "$TEST_TRANSPORT", 00:22:21.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.036 "adrfam": "ipv4", 00:22:21.036 "trsvcid": "$NVMF_PORT", 00:22:21.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.036 "hdgst": ${hdgst:-false}, 00:22:21.036 "ddgst": ${ddgst:-false} 00:22:21.036 }, 00:22:21.036 "method": "bdev_nvme_attach_controller" 00:22:21.036 } 00:22:21.036 EOF 00:22:21.036 )") 00:22:21.036 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:21.036 [2024-12-06 15:39:27.015490] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:22:21.036 [2024-12-06 15:39:27.015540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070182 ] 00:22:21.036 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:21.036 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:21.036 { 00:22:21.036 "params": { 00:22:21.036 "name": "Nvme$subsystem", 00:22:21.036 "trtype": "$TEST_TRANSPORT", 00:22:21.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.036 "adrfam": "ipv4", 00:22:21.036 "trsvcid": "$NVMF_PORT", 00:22:21.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.036 "hdgst": ${hdgst:-false}, 00:22:21.036 "ddgst": ${ddgst:-false} 00:22:21.036 }, 00:22:21.036 "method": "bdev_nvme_attach_controller" 00:22:21.036 } 00:22:21.036 EOF 00:22:21.036 )") 00:22:21.036 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:21.036 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:21.036 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:21.036 { 00:22:21.036 "params": { 00:22:21.036 "name": "Nvme$subsystem", 00:22:21.036 "trtype": "$TEST_TRANSPORT", 00:22:21.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.036 "adrfam": "ipv4", 00:22:21.036 "trsvcid": "$NVMF_PORT", 00:22:21.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.036 "hdgst": ${hdgst:-false}, 00:22:21.036 "ddgst": ${ddgst:-false} 00:22:21.036 }, 00:22:21.036 "method": 
"bdev_nvme_attach_controller" 00:22:21.036 } 00:22:21.036 EOF 00:22:21.036 )") 00:22:21.036 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:21.294 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:21.294 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:21.294 { 00:22:21.294 "params": { 00:22:21.294 "name": "Nvme$subsystem", 00:22:21.294 "trtype": "$TEST_TRANSPORT", 00:22:21.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.294 "adrfam": "ipv4", 00:22:21.294 "trsvcid": "$NVMF_PORT", 00:22:21.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.294 "hdgst": ${hdgst:-false}, 00:22:21.294 "ddgst": ${ddgst:-false} 00:22:21.294 }, 00:22:21.294 "method": "bdev_nvme_attach_controller" 00:22:21.294 } 00:22:21.294 EOF 00:22:21.294 )") 00:22:21.294 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:21.294 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:22:21.294 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:21.294 15:39:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:21.294 "params": { 00:22:21.294 "name": "Nvme1", 00:22:21.294 "trtype": "tcp", 00:22:21.294 "traddr": "10.0.0.2", 00:22:21.294 "adrfam": "ipv4", 00:22:21.294 "trsvcid": "4420", 00:22:21.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:21.295 "hdgst": false, 00:22:21.295 "ddgst": false 00:22:21.295 }, 00:22:21.295 "method": "bdev_nvme_attach_controller" 00:22:21.295 },{ 00:22:21.295 "params": { 00:22:21.295 "name": "Nvme2", 00:22:21.295 "trtype": "tcp", 00:22:21.295 "traddr": "10.0.0.2", 00:22:21.295 "adrfam": "ipv4", 00:22:21.295 "trsvcid": "4420", 00:22:21.295 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:21.295 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:21.295 "hdgst": false, 00:22:21.295 "ddgst": false 00:22:21.295 }, 00:22:21.295 "method": "bdev_nvme_attach_controller" 00:22:21.295 },{ 00:22:21.295 "params": { 00:22:21.295 "name": "Nvme3", 00:22:21.295 "trtype": "tcp", 00:22:21.295 "traddr": "10.0.0.2", 00:22:21.295 "adrfam": "ipv4", 00:22:21.295 "trsvcid": "4420", 00:22:21.295 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:21.295 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:21.295 "hdgst": false, 00:22:21.295 "ddgst": false 00:22:21.295 }, 00:22:21.295 "method": "bdev_nvme_attach_controller" 00:22:21.295 },{ 00:22:21.295 "params": { 00:22:21.295 "name": "Nvme4", 00:22:21.295 "trtype": "tcp", 00:22:21.295 "traddr": "10.0.0.2", 00:22:21.295 "adrfam": "ipv4", 00:22:21.295 "trsvcid": "4420", 00:22:21.295 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:21.295 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:21.295 "hdgst": false, 00:22:21.295 "ddgst": false 00:22:21.295 }, 00:22:21.295 "method": "bdev_nvme_attach_controller" 00:22:21.295 },{ 00:22:21.295 "params": { 
00:22:21.295 "name": "Nvme5", 00:22:21.295 "trtype": "tcp", 00:22:21.295 "traddr": "10.0.0.2", 00:22:21.295 "adrfam": "ipv4", 00:22:21.295 "trsvcid": "4420", 00:22:21.295 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:21.295 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:21.295 "hdgst": false, 00:22:21.295 "ddgst": false 00:22:21.295 }, 00:22:21.295 "method": "bdev_nvme_attach_controller" 00:22:21.295 },{ 00:22:21.295 "params": { 00:22:21.295 "name": "Nvme6", 00:22:21.295 "trtype": "tcp", 00:22:21.295 "traddr": "10.0.0.2", 00:22:21.295 "adrfam": "ipv4", 00:22:21.295 "trsvcid": "4420", 00:22:21.295 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:21.295 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:21.295 "hdgst": false, 00:22:21.295 "ddgst": false 00:22:21.295 }, 00:22:21.295 "method": "bdev_nvme_attach_controller" 00:22:21.295 },{ 00:22:21.295 "params": { 00:22:21.295 "name": "Nvme7", 00:22:21.295 "trtype": "tcp", 00:22:21.295 "traddr": "10.0.0.2", 00:22:21.295 "adrfam": "ipv4", 00:22:21.295 "trsvcid": "4420", 00:22:21.295 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:21.295 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:21.295 "hdgst": false, 00:22:21.295 "ddgst": false 00:22:21.295 }, 00:22:21.295 "method": "bdev_nvme_attach_controller" 00:22:21.295 },{ 00:22:21.295 "params": { 00:22:21.295 "name": "Nvme8", 00:22:21.295 "trtype": "tcp", 00:22:21.295 "traddr": "10.0.0.2", 00:22:21.295 "adrfam": "ipv4", 00:22:21.295 "trsvcid": "4420", 00:22:21.295 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:21.295 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:21.295 "hdgst": false, 00:22:21.295 "ddgst": false 00:22:21.295 }, 00:22:21.295 "method": "bdev_nvme_attach_controller" 00:22:21.295 },{ 00:22:21.295 "params": { 00:22:21.295 "name": "Nvme9", 00:22:21.295 "trtype": "tcp", 00:22:21.295 "traddr": "10.0.0.2", 00:22:21.295 "adrfam": "ipv4", 00:22:21.295 "trsvcid": "4420", 00:22:21.295 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:21.295 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:21.295 "hdgst": false, 00:22:21.295 "ddgst": false 00:22:21.295 }, 00:22:21.295 "method": "bdev_nvme_attach_controller" 00:22:21.295 },{ 00:22:21.295 "params": { 00:22:21.295 "name": "Nvme10", 00:22:21.295 "trtype": "tcp", 00:22:21.295 "traddr": "10.0.0.2", 00:22:21.295 "adrfam": "ipv4", 00:22:21.295 "trsvcid": "4420", 00:22:21.295 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:21.295 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:21.295 "hdgst": false, 00:22:21.295 "ddgst": false 00:22:21.295 }, 00:22:21.295 "method": "bdev_nvme_attach_controller" 00:22:21.295 }' 00:22:21.295 [2024-12-06 15:39:27.093608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.295 [2024-12-06 15:39:27.134270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.197 Running I/O for 10 seconds... 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:23.197 15:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:23.455 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
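The trace above shows `waitforio` from target/shutdown.sh polling `bdev_get_iostat` over the bdevperf RPC socket, pulling `.bdevs[0].num_read_ops` out with jq, and retrying up to 10 times with a 0.25 s sleep until at least 100 reads have completed (the log observes 3, then 88, then 195). A standalone sketch of that loop follows; the RPC call is replaced by a stub that replays the counts from the log, and the stub/helper names are our own:

```shell
# Replays the read counts observed in the log: 3, then 88, then 195.
samples="3 88 195"

next_read_ops() {
  # Stub for the real query:
  #   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
  #     jq -r '.bdevs[0].num_read_ops'
  read_io_count="${samples%% *}"
  case "$samples" in *' '*) samples="${samples#* }" ;; esac
}

waitforio_sketch() {
  ret=1
  i=10                            # same retry budget as target/shutdown.sh
  while [ "$i" -ne 0 ]; do
    next_read_ops
    if [ "$read_io_count" -ge 100 ]; then
      ret=0                       # enough I/O observed; shutdown test proceeds
      break
    fi
    sleep 0.25                    # same back-off as the traced loop
    i=$((i - 1))
  done
  return "$ret"
}
```

With the stubbed samples the loop succeeds on the third probe, matching the trace where `read_io_count=195` triggers `break` and `return 0`.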
00:22:23.455 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:23.455 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:23.455 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:23.455 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.455 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.455 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.455 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=88 00:22:23.455 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 88 -ge 100 ']' 00:22:23.455 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3069900 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3069900 ']' 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3069900 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3069900 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3069900' 00:22:23.733 killing process with pid 3069900 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3069900 00:22:23.733 15:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3069900 00:22:23.733 [2024-12-06 15:39:29.613696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb2ac0 is same with the state(6) to be set 00:22:23.734 [2024-12-06 15:39:29.616069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb2fb0 is same with the state(6) to be set 00:22:23.734 [2024-12-06 15:39:29.617674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3480 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set
*ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 
is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 
00:22:23.735 [2024-12-06 15:39:29.619355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.735 [2024-12-06 15:39:29.619361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619439] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.619569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3970 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 
is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 
00:22:23.736 [2024-12-06 15:39:29.620328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620409] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 
is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.736 [2024-12-06 15:39:29.620598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.620604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.620610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.620616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.620621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb3e40 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 
00:22:23.737 [2024-12-06 15:39:29.622020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622098] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 
is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 
00:22:23.737 [2024-12-06 15:39:29.622327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.622407] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4800 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.623202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.623217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.623224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.623230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.623236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.737 [2024-12-06 15:39:29.623242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 
is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 
00:22:23.738 [2024-12-06 15:39:29.623437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623508] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.623592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb4cd0 is same with the state(6) to be set 00:22:23.738 [2024-12-06 15:39:29.638557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.738 [2024-12-06 15:39:29.638603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.738 [2024-12-06 15:39:29.638620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.738 [2024-12-06 15:39:29.638627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.738 [2024-12-06 15:39:29.638637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.738 [2024-12-06 15:39:29.638644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.738 [2024-12-06 15:39:29.638653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.738 [2024-12-06 15:39:29.638659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.738 [2024-12-06 15:39:29.638667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:23.738 [2024-12-06 15:39:29.638674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.738 [2024-12-06 15:39:29.638682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.738 [2024-12-06 15:39:29.638689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.738 [2024-12-06 15:39:29.638698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.738 [2024-12-06 15:39:29.638704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.738 [2024-12-06 15:39:29.638713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.738 [2024-12-06 15:39:29.638724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.738 [2024-12-06 15:39:29.638733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.738 [2024-12-06 15:39:29.638740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.738 [2024-12-06 15:39:29.638749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.738 [2024-12-06 15:39:29.638756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.738 [2024-12-06 15:39:29.638764] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.738 [2024-12-06 15:39:29.638770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.738 [2024-12-06 15:39:29.638778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.738 [2024-12-06 15:39:29.638785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.738 [2024-12-06 15:39:29.638793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.738 [2024-12-06 15:39:29.638799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.738 [2024-12-06 15:39:29.638807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.738 [2024-12-06 15:39:29.638814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.638822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.638828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.638836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.638843] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.638851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.638858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.638867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.638874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.638882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.638889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.638897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.638903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.638913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.638920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.638928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.638934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.638942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.638949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.638957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.638963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.638971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.638978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.638986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.638992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 
[2024-12-06 15:39:29.639014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 
15:39:29.639350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.739 [2024-12-06 15:39:29.639401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.739 [2024-12-06 15:39:29.639409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.740 [2024-12-06 15:39:29.639416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.740 [2024-12-06 15:39:29.639430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639438] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.740 [2024-12-06 15:39:29.639445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.740 [2024-12-06 15:39:29.639459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.740 [2024-12-06 15:39:29.639475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.740 [2024-12-06 15:39:29.639490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.740 [2024-12-06 15:39:29.639504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.740 [2024-12-06 15:39:29.639518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.740 [2024-12-06 15:39:29.639533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.740 [2024-12-06 15:39:29.639548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.740 [2024-12-06 15:39:29.639562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:23.740 [2024-12-06 15:39:29.639729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.639739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.639754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 
[2024-12-06 15:39:29.639761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.639767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.639781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d18a0 is same with the state(6) to be set 00:22:23.740 [2024-12-06 15:39:29.639816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.639825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.639841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.639854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.639868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2717830 is same with the state(6) to be set 00:22:23.740 [2024-12-06 15:39:29.639898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.639906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.639920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.639934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.639947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bb610 is same with the state(6) to be set 00:22:23.740 [2024-12-06 
15:39:29.639979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.639987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.639995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.640001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.640008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.640015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.640022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.640029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.640035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d16a0 is same with the state(6) to be set 00:22:23.740 [2024-12-06 15:39:29.640060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.640070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.640078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.640084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.640091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.640097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.640107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.640114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.640120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae10 is same with the state(6) to be set 00:22:23.740 [2024-12-06 15:39:29.640143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.640151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.640159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.640165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.640173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 
15:39:29.640179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.640186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.640193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.640199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6940 is same with the state(6) to be set 00:22:23.740 [2024-12-06 15:39:29.640221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.640229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.640236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.640243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.640250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.640256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.740 [2024-12-06 15:39:29.640264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.740 [2024-12-06 15:39:29.640270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d2c80 is same with the state(6) to be set 00:22:23.741 [2024-12-06 15:39:29.640301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.741 [2024-12-06 15:39:29.640310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.741 [2024-12-06 15:39:29.640324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.741 [2024-12-06 15:39:29.640338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.741 [2024-12-06 15:39:29.640352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26fd550 is same with the state(6) to be set 00:22:23.741 [2024-12-06 15:39:29.640389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.741 [2024-12-06 15:39:29.640398] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.741 [2024-12-06 15:39:29.640412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.741 [2024-12-06 15:39:29.640425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.741 [2024-12-06 15:39:29.640439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271d760 is same with the state(6) to be set 00:22:23.741 [2024-12-06 15:39:29.640465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.741 [2024-12-06 15:39:29.640473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.741 [2024-12-06 15:39:29.640486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.741 [2024-12-06 15:39:29.640500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.741 [2024-12-06 15:39:29.640516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6dd0 is same with the state(6) to be set 00:22:23.741 [2024-12-06 15:39:29.640865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.640887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.640908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.640923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 
15:39:29.640932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.640939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.640955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.640970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.640985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.640993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.641000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.641008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.641014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.641022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.641029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.641037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.641044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.641052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.641062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.641070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.641077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.641085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.641092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.641100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.641107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.641116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.641122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.641130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.641137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.641145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.641152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.641160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.641167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.641175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.641182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.741 [2024-12-06 15:39:29.641190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.741 [2024-12-06 15:39:29.641196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 
15:39:29.641273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.742 [2024-12-06 15:39:29.641525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.742 [2024-12-06 15:39:29.641533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.742 [2024-12-06 15:39:29.641811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.742 [2024-12-06 15:39:29.641818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.641827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.641835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.641841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.641849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.641856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.641863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a7790 is same with the state(6) to be set
00:22:23.743 [2024-12-06 15:39:29.642060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.642392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.642398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.743 [2024-12-06 15:39:29.648451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.743 [2024-12-06 15:39:29.648459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.744 [2024-12-06 15:39:29.648848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.744 [2024-12-06 15:39:29.648855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ab070 is same with the state(6) to be set
00:22:23.744 [2024-12-06 15:39:29.649862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d18a0 (9): Bad file descriptor
00:22:23.744 [2024-12-06 15:39:29.649889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2717830 (9): Bad file descriptor
00:22:23.744 [2024-12-06 15:39:29.649902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bb610 (9): Bad file descriptor
00:22:23.744 [2024-12-06 15:39:29.649919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d16a0 (9): Bad file descriptor
00:22:23.744 [2024-12-06 15:39:29.649931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229ae10 (9): Bad file descriptor
00:22:23.744 [2024-12-06 15:39:29.649947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a6940 (9): Bad file descriptor
00:22:23.744 [2024-12-06 15:39:29.649962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d2c80 (9): Bad file descriptor
00:22:23.744 [2024-12-06 15:39:29.649976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26fd550 (9): Bad file descriptor
00:22:23.744 [2024-12-06 15:39:29.649988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x271d760 (9): Bad file descriptor
00:22:23.744 [2024-12-06 15:39:29.650001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a6dd0 (9): Bad file descriptor
00:22:23.744 [2024-12-06 15:39:29.652129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:23.744 [2024-12-06 15:39:29.652752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:23.744 [2024-12-06 15:39:29.652785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:23.744 [2024-12-06 15:39:29.652916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.744 [2024-12-06 15:39:29.652931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x271d760 with addr=10.0.0.2, port=4420
00:22:23.744 [2024-12-06 15:39:29.652940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271d760 is same with the state(6) to be set
00:22:23.744 [2024-12-06 15:39:29.653896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.744 [2024-12-06 15:39:29.653924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26d16a0 with addr=10.0.0.2, port=4420
00:22:23.744 [2024-12-06 15:39:29.653932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d16a0 is same with the state(6) to be set
00:22:23.744 [2024-12-06 15:39:29.654019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.744 [2024-12-06 15:39:29.654030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26fd550 with addr=10.0.0.2, port=4420
00:22:23.744 [2024-12-06 15:39:29.654037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26fd550 is same with the state(6) to be set
00:22:23.744 [2024-12-06 15:39:29.654048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x271d760 (9): Bad file descriptor
00:22:23.744 [2024-12-06 15:39:29.654192] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:23.744 [2024-12-06 15:39:29.654241] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:23.744 [2024-12-06 15:39:29.654285] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:23.744 [2024-12-06 15:39:29.654328] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:23.744 [2024-12-06 15:39:29.654435] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:23.744 [2024-12-06 15:39:29.654481] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:23.744 [2024-12-06 15:39:29.654515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d16a0 (9): Bad file descriptor
00:22:23.744 [2024-12-06 15:39:29.654528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26fd550 (9): Bad file descriptor
00:22:23.744 [2024-12-06 15:39:29.654537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:22:23.744 [2024-12-06 15:39:29.654544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:22:23.744 [2024-12-06 15:39:29.654553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:22:23.744 [2024-12-06 15:39:29.654562] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:22:23.745 [2024-12-06 15:39:29.654598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.745 [2024-12-06 15:39:29.654852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.745 [2024-12-06 15:39:29.654860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.654867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.654876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.654883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.654891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.654897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.654905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.654912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.654920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.654927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.654935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.654941] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.654949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.654955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.654964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.654970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.654978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.654984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.654992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.654999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.655006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.655013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.655021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.655027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.655035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.655042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.655050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.655057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.655066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.655073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.655082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.655088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.655096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.655102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.655110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.655117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.655125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.655131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.655139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.655145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.655154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.655161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.655169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 15:39:29.655175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.655183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.745 [2024-12-06 
15:39:29.655190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.745 [2024-12-06 15:39:29.655198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655271] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 
[2024-12-06 15:39:29.655447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.655549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.655556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27d2440 is same with the state(6) to be set 00:22:23.746 [2024-12-06 15:39:29.655718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:23.746 [2024-12-06 15:39:29.655728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:23.746 [2024-12-06 15:39:29.655736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:23.746 [2024-12-06 15:39:29.655742] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:23.746 [2024-12-06 15:39:29.655750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:23.746 [2024-12-06 15:39:29.655755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:23.746 [2024-12-06 15:39:29.655762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:22:23.746 [2024-12-06 15:39:29.655767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:23.746 [2024-12-06 15:39:29.656701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.656712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.656724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.656731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.656740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.656747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.656755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.656762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.656770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.656777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.656785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.656792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.656800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.746 [2024-12-06 15:39:29.656806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.746 [2024-12-06 15:39:29.656814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.656821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.656829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.656836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.656844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.656851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.656859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.656865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.656873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.656880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.656888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.656897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.656906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.656913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.656922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.656928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.656937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.656943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.656952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.656959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.656967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.656973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.656981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.656988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.656996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657040] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657127] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 
15:39:29.657303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.747 [2024-12-06 15:39:29.657422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.747 [2024-12-06 15:39:29.657430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.657445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.657460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.657476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.657495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.657510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.657524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.657539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.657554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.748 [2024-12-06 15:39:29.657569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.657583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.657598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.657612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.657627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.657642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.657648] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.748 [2024-12-06 15:39:29.657656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.748 [2024-12-06 15:39:29.657667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.748 [2024-12-06 15:39:29.657675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.748 [2024-12-06 15:39:29.657683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.748 [2024-12-06 15:39:29.657690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a6480 is same with the state(6) to be set
00:22:23.748 [2024-12-06 15:39:29.657761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:23.748 [2024-12-06 15:39:29.658710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:23.748 [2024-12-06 15:39:29.658812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.748 [2024-12-06 15:39:29.658825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22a6dd0 with addr=10.0.0.2, port=4420
00:22:23.748 [2024-12-06 15:39:29.658834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6dd0 is same with the state(6) to be set
00:22:23.748 [2024-12-06 15:39:29.659261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.748 [2024-12-06 15:39:29.659273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26d18a0 with addr=10.0.0.2, port=4420
00:22:23.748 [2024-12-06 15:39:29.659281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d18a0 is same with the state(6) to be set
00:22:23.748 [2024-12-06 15:39:29.659291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a6dd0 (9): Bad file descriptor
00:22:23.748 [2024-12-06 15:39:29.659552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d18a0 (9): Bad file descriptor
00:22:23.748 [2024-12-06 15:39:29.659563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:23.748 [2024-12-06 15:39:29.659569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:23.748 [2024-12-06 15:39:29.659576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:23.748 [2024-12-06 15:39:29.659583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:23.748 [2024-12-06 15:39:29.659623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:23.748 [2024-12-06 15:39:29.659630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:23.748 [2024-12-06 15:39:29.659636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:23.748 [2024-12-06 15:39:29.659643] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:23.748 [2024-12-06 15:39:29.659962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.659972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.659984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.659991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.659999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.660006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.660017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.660024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.660032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.660039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.660047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.660053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.660062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.660068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.660077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.660083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.660091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.660098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.660106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.660112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.660121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.660128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.660136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.660143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.660151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.660158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.660166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.660172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.748 [2024-12-06 15:39:29.660181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.748 [2024-12-06 15:39:29.660187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.749 [2024-12-06 15:39:29.660225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660305] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 
15:39:29.660564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.749 [2024-12-06 15:39:29.660639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.749 [2024-12-06 15:39:29.660647] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.749 [2024-12-06 15:39:29.660653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.749 [2024-12-06 15:39:29.660661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.749 [2024-12-06 15:39:29.660668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.749 [2024-12-06 15:39:29.660676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.749 [2024-12-06 15:39:29.660682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.749 [2024-12-06 15:39:29.660690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.749 [2024-12-06 15:39:29.660697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.749 [2024-12-06 15:39:29.660705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.749 [2024-12-06 15:39:29.660711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.749 [2024-12-06 15:39:29.660719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.749 [2024-12-06 15:39:29.660726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.749 [2024-12-06 15:39:29.660734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.749 [2024-12-06 15:39:29.660740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.749 [2024-12-06 15:39:29.660748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.749 [2024-12-06 15:39:29.660755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.749 [2024-12-06 15:39:29.660764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.749 [2024-12-06 15:39:29.660771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.749 [2024-12-06 15:39:29.660779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.749 [2024-12-06 15:39:29.660785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.749 [2024-12-06 15:39:29.660794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.660801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.660809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.660815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.660823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.660830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.660838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.660845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.660853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.660859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.660867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.660874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.660882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.660888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.660896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.660902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.660910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.660917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.660924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25eb2e0 is same with the state(6) to be set
00:22:23.750 [2024-12-06 15:39:29.661899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.661909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.661920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.661929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.661938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.661944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.661953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.661959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.661968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.661974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.661982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.661989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.661998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.750 [2024-12-06 15:39:29.662286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.750 [2024-12-06 15:39:29.662294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.662856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.662864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26980d0 is same with the state(6) to be set
00:22:23.751 [2024-12-06 15:39:29.663845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.663856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.663867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.663873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.663882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.663889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.751 [2024-12-06 15:39:29.663898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.751 [2024-12-06 15:39:29.663905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.663913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.663920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.663928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.663935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.663944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.663950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.663959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.663965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.663973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.663980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.663988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.663995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.752 [2024-12-06 15:39:29.664361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.752 [2024-12-06 15:39:29.664376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:22:23.752 [2024-12-06 15:39:29.664385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.752 [2024-12-06 15:39:29.664392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.752 [2024-12-06 15:39:29.664399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.752 [2024-12-06 15:39:29.664406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.752 [2024-12-06 15:39:29.664416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.752 [2024-12-06 15:39:29.664423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.752 [2024-12-06 15:39:29.664431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.752 [2024-12-06 15:39:29.664438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.752 [2024-12-06 15:39:29.664446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.752 [2024-12-06 15:39:29.664453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.752 [2024-12-06 15:39:29.664460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.752 [2024-12-06 
15:39:29.664467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.752 [2024-12-06 15:39:29.664475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.752 [2024-12-06 15:39:29.664481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.752 [2024-12-06 15:39:29.664490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.752 [2024-12-06 15:39:29.664496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.752 [2024-12-06 15:39:29.664504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.752 [2024-12-06 15:39:29.664511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.752 [2024-12-06 15:39:29.664518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.752 [2024-12-06 15:39:29.664525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664547] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 
[2024-12-06 15:39:29.664717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.664806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.664813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a5190 is same with the state(6) to be set 00:22:23.753 [2024-12-06 15:39:29.665787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.665798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.665809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.665815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.665823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.665830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.665838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.665845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.665853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.665859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.665868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.665874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.665883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.665890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.665898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.665904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.665913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.665919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.665927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.665934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.753 [2024-12-06 15:39:29.665943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.665950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.665958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.665964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.665973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.665979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.665987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.665993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.666001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.666008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.666017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.666023] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.666032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.666038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.666046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.666052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.666061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.666068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.753 [2024-12-06 15:39:29.666076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.753 [2024-12-06 15:39:29.666083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 
15:39:29.666273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666362] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 
[2024-12-06 15:39:29.666540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.754 [2024-12-06 15:39:29.666662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.754 [2024-12-06 15:39:29.666670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.666677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.666685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.666691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.666700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.666707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.666716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.666722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.666731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.666738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.666746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.666752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.666759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a8aa0 is same with the state(6) to be set 00:22:23.755 [2024-12-06 15:39:29.667744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.755 [2024-12-06 15:39:29.667782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.667986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.667994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.755 [2024-12-06 15:39:29.668038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668120] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.755 [2024-12-06 15:39:29.668231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.755 [2024-12-06 15:39:29.668237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 
15:39:29.668377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668462] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 
[2024-12-06 15:39:29.668634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.756 [2024-12-06 15:39:29.668710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.756 [2024-12-06 15:39:29.668717] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26a9db0 is same with the state(6) to be set 00:22:23.756
[2024-12-06 15:39:29.669670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:23.756
[2024-12-06 15:39:29.669687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:23.756
[2024-12-06 15:39:29.669698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:23.756
[2024-12-06 15:39:29.669710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:23.756
task offset: 26752 on job bdev=Nvme10n1 fails 00:22:23.756

Latency(us)
All jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; each job ended in error; Verification LBA range: start 0x0 length 0x400
[2024-12-06T14:39:29.754Z]
Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average       min       max
Nvme1n1            :       0.91  215.28   13.45   70.29  0.00  221715.85  14542.75  219701.64
Nvme2n1            :       0.92  214.07   13.38   69.90  0.00  219148.30  16976.94  213709.78
Nvme3n1            :       0.92  209.26   13.08   69.75  0.00  219268.14  18849.40  227690.79
Nvme4n1            :       0.92  208.82   13.05   69.61  0.00  215835.55  14043.43  210713.84
Nvme5n1            :       0.91  210.42   13.15   70.14  0.00  210208.67  16352.79  220700.28
Nvme6n1            :       0.90  212.16   13.26   70.72  0.00  204458.67  28960.67  227690.79
Nvme7n1            :       0.92  212.72   13.29   69.46  0.00  201620.66  16852.11  205720.62
Nvme8n1            :       0.92  212.27   13.27   69.31  0.00  198260.56   7989.15  213709.78
Nvme9n1            :       0.91  211.94   13.25   70.65  0.00  193141.88  13232.03  243669.09
Nvme10n1           :       0.90  212.48   13.28   70.83  0.00  188697.72  10610.59  219701.64
===================================================================================================
[2024-12-06T14:39:29.755Z]
Total              :             2119.40  132.46  700.66  0.00  207253.93   7989.15  243669.09

00:22:23.757 [2024-12-06 15:39:29.699380] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:23.757
[2024-12-06 15:39:29.699433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:23.757
[2024-12-06 15:39:29.699775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.757
[2024-12-06 15:39:29.699793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x229ae10 with addr=10.0.0.2, port=4420 00:22:23.757
[2024-12-06 15:39:29.699805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x229ae10 is same with the state(6) to be set 00:22:23.757
[2024-12-06 15:39:29.699953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.757
[2024-12-06 15:39:29.699964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22a6940 with addr=10.0.0.2, port=4420 00:22:23.757
[2024-12-06 15:39:29.699972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6940 is same with the state(6) to be set 00:22:23.757
[2024-12-06 15:39:29.700092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.757
[2024-12-06 15:39:29.700103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26d2c80 with addr=10.0.0.2, port=4420 00:22:23.757
[2024-12-06 15:39:29.700110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d2c80 is same with the state(6) to be set 00:22:23.757
[2024-12-06 15:39:29.700249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.757
[2024-12-06 15:39:29.700259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bb610 with addr=10.0.0.2, port=4420
[2024-12-06 15:39:29.700266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bb610 is same with the state(6) to be set 00:22:23.757 [2024-12-06 15:39:29.701419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:23.757 [2024-12-06 15:39:29.701437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:23.757 [2024-12-06 15:39:29.701446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:23.757 [2024-12-06 15:39:29.701460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:23.757 [2024-12-06 15:39:29.701468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:23.757 [2024-12-06 15:39:29.701779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.757 [2024-12-06 15:39:29.701793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2717830 with addr=10.0.0.2, port=4420 00:22:23.757 [2024-12-06 15:39:29.701802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2717830 is same with the state(6) to be set 00:22:23.757 [2024-12-06 15:39:29.701815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x229ae10 (9): Bad file descriptor 00:22:23.757 [2024-12-06 15:39:29.701826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a6940 (9): Bad file descriptor 00:22:23.757 [2024-12-06 15:39:29.701835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d2c80 (9): Bad file descriptor 00:22:23.757 [2024-12-06 15:39:29.701844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bb610 (9): Bad file descriptor 00:22:23.757 
[2024-12-06 15:39:29.701880] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:22:23.757 [2024-12-06 15:39:29.701892] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:22:23.757 [2024-12-06 15:39:29.701902] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:22:23.757 [2024-12-06 15:39:29.701912] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:22:23.757 [2024-12-06 15:39:29.702182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.757 [2024-12-06 15:39:29.702195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x271d760 with addr=10.0.0.2, port=4420 00:22:23.757 [2024-12-06 15:39:29.702203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x271d760 is same with the state(6) to be set 00:22:23.757 [2024-12-06 15:39:29.702338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.757 [2024-12-06 15:39:29.702348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26fd550 with addr=10.0.0.2, port=4420 00:22:23.757 [2024-12-06 15:39:29.702355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26fd550 is same with the state(6) to be set 00:22:23.757 [2024-12-06 15:39:29.702573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.757 [2024-12-06 15:39:29.702584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26d16a0 with addr=10.0.0.2, port=4420 00:22:23.757 [2024-12-06 15:39:29.702591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x26d16a0 is same with the state(6) to be set 00:22:23.757 [2024-12-06 15:39:29.702784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.757 [2024-12-06 15:39:29.702794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22a6dd0 with addr=10.0.0.2, port=4420 00:22:23.757 [2024-12-06 15:39:29.702801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a6dd0 is same with the state(6) to be set 00:22:23.757 [2024-12-06 15:39:29.702870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.757 [2024-12-06 15:39:29.702880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26d18a0 with addr=10.0.0.2, port=4420 00:22:23.757 [2024-12-06 15:39:29.702887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d18a0 is same with the state(6) to be set 00:22:23.757 [2024-12-06 15:39:29.702897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2717830 (9): Bad file descriptor 00:22:23.757 [2024-12-06 15:39:29.702908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:23.757 [2024-12-06 15:39:29.702915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:23.757 [2024-12-06 15:39:29.702923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:23.757 [2024-12-06 15:39:29.702932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:22:23.757 [2024-12-06 15:39:29.702940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:23.757 [2024-12-06 15:39:29.702945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:23.757 [2024-12-06 15:39:29.702952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:23.757 [2024-12-06 15:39:29.702957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:23.757 [2024-12-06 15:39:29.702963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:23.757 [2024-12-06 15:39:29.702969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:23.757 [2024-12-06 15:39:29.702976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:23.757 [2024-12-06 15:39:29.702981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:23.757 [2024-12-06 15:39:29.702988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:23.757 [2024-12-06 15:39:29.702993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:23.757 [2024-12-06 15:39:29.703000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:23.757 [2024-12-06 15:39:29.703005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:22:23.757 [2024-12-06 15:39:29.703074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x271d760 (9): Bad file descriptor 00:22:23.757 [2024-12-06 15:39:29.703085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26fd550 (9): Bad file descriptor 00:22:23.757 [2024-12-06 15:39:29.703093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d16a0 (9): Bad file descriptor 00:22:23.757 [2024-12-06 15:39:29.703101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a6dd0 (9): Bad file descriptor 00:22:23.757 [2024-12-06 15:39:29.703110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26d18a0 (9): Bad file descriptor 00:22:23.757 [2024-12-06 15:39:29.703117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:23.757 [2024-12-06 15:39:29.703123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:23.757 [2024-12-06 15:39:29.703129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:23.757 [2024-12-06 15:39:29.703135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:23.757 [2024-12-06 15:39:29.703158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:23.757 [2024-12-06 15:39:29.703165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:23.757 [2024-12-06 15:39:29.703171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:22:23.757 [2024-12-06 15:39:29.703177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:23.757 [2024-12-06 15:39:29.703186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:23.757 [2024-12-06 15:39:29.703191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:23.757 [2024-12-06 15:39:29.703198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:23.757 [2024-12-06 15:39:29.703204] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:23.757 [2024-12-06 15:39:29.703210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:23.758 [2024-12-06 15:39:29.703215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:23.758 [2024-12-06 15:39:29.703221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:23.758 [2024-12-06 15:39:29.703227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:23.758 [2024-12-06 15:39:29.703233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:23.758 [2024-12-06 15:39:29.703239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:23.758 [2024-12-06 15:39:29.703245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:22:23.758 [2024-12-06 15:39:29.703251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:23.758 [2024-12-06 15:39:29.703257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:23.758 [2024-12-06 15:39:29.703263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:23.758 [2024-12-06 15:39:29.703269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:23.758 [2024-12-06 15:39:29.703275] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:24.016 15:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3070182 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3070182 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3070182 
00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:25.395 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:25.396 15:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:25.396 rmmod nvme_tcp 00:22:25.396 rmmod nvme_fabrics 00:22:25.396 rmmod nvme_keyring 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3069900 ']' 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3069900 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3069900 ']' 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3069900 00:22:25.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3069900) - No such process 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3069900 is not found' 00:22:25.396 Process with pid 3069900 is not found 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:25.396 15:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.396 15:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:27.303 00:22:27.303 real 0m8.017s 00:22:27.303 user 0m20.084s 00:22:27.303 sys 0m1.385s 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:27.303 ************************************ 00:22:27.303 END TEST nvmf_shutdown_tc3 00:22:27.303 ************************************ 00:22:27.303 15:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:27.303 ************************************ 00:22:27.303 START TEST nvmf_shutdown_tc4 00:22:27.303 ************************************ 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:27.303 15:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:27.303 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:27.304 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:27.304 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.304 15:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:27.304 Found net devices under 0000:86:00.0: cvl_0_0 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.304 15:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:27.304 Found net devices under 0000:86:00.1: cvl_0_1 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:27.304 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.564 
15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:27.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:22:27.564 00:22:27.564 --- 10.0.0.2 ping statistics --- 00:22:27.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.564 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:27.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:22:27.564 00:22:27.564 --- 10.0.0.1 ping statistics --- 00:22:27.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.564 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:27.564 15:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:27.564 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:27.823 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:27.823 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:27.823 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:27.823 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:27.823 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3071248 00:22:27.823 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3071248 00:22:27.823 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:27.823 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3071248 ']' 00:22:27.823 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.823 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.823 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.823 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.823 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:27.823 [2024-12-06 15:39:33.634970] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:22:27.823 [2024-12-06 15:39:33.635024] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.823 [2024-12-06 15:39:33.715192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.823 [2024-12-06 15:39:33.755280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.823 [2024-12-06 15:39:33.755318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.824 [2024-12-06 15:39:33.755325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.824 [2024-12-06 15:39:33.755330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.824 [2024-12-06 15:39:33.755335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:27.824 [2024-12-06 15:39:33.756940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.824 [2024-12-06 15:39:33.757035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.824 [2024-12-06 15:39:33.757144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.824 [2024-12-06 15:39:33.757145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:28.082 [2024-12-06 15:39:33.901877] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.082 15:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.082 15:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:28.082 Malloc1 00:22:28.082 [2024-12-06 15:39:34.008705] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.082 Malloc2 00:22:28.082 Malloc3 00:22:28.341 Malloc4 00:22:28.341 Malloc5 00:22:28.341 Malloc6 00:22:28.341 Malloc7 00:22:28.341 Malloc8 00:22:28.599 Malloc9 
00:22:28.599 Malloc10 00:22:28.599 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.599 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:28.599 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:28.599 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:28.599 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3071500 00:22:28.599 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:28.599 15:39:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:28.599 [2024-12-06 15:39:34.522269] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:33.873 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:33.873 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3071248 00:22:33.873 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3071248 ']' 00:22:33.873 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3071248 00:22:33.873 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:33.873 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.873 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3071248 00:22:33.873 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:33.873 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:33.873 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3071248' 00:22:33.873 killing process with pid 3071248 00:22:33.873 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3071248 00:22:33.873 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3071248 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 
00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed 
with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 [2024-12-06 15:39:39.519881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 
Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 [2024-12-06 15:39:39.520410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03f00 is same with the state(6) to be set 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 [2024-12-06 15:39:39.520448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03f00 is same with the state(6) to be set 00:22:33.873 [2024-12-06 15:39:39.520456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03f00 is same with the state(6) to be set 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 [2024-12-06 15:39:39.520462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03f00 is same with the state(6) to be set 00:22:33.873 [2024-12-06 15:39:39.520470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03f00 is same with the state(6) to be set 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 [2024-12-06 15:39:39.520475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03f00 is same with starting I/O failed: -6 00:22:33.873 the state(6) to be set 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.873 
starting I/O failed: -6 00:22:33.873 Write completed with error (sct=0, sc=8) 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 [2024-12-06 15:39:39.520740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 [2024-12-06 15:39:39.520958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e043d0 is same with the state(6) to be set 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 [2024-12-06 15:39:39.520982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e043d0 is same with the state(6) to be set 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 [2024-12-06 15:39:39.520990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e043d0 is same with the state(6) to be set 00:22:33.874 starting I/O failed: -6 00:22:33.874 [2024-12-06 15:39:39.520997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e043d0 is same with the state(6) to be set 00:22:33.874 [2024-12-06 
15:39:39.521004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e043d0 is same with the state(6) to be set 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 [2024-12-06 15:39:39.521010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e043d0 is same with the state(6) to be set 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 [2024-12-06 15:39:39.521297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205aad0 is same with the state(6) to be set 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 [2024-12-06 15:39:39.521319] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205aad0 is same with the state(6) to be set 00:22:33.874 starting I/O failed: -6 00:22:33.874 [2024-12-06 15:39:39.521327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205aad0 is same with the state(6) to be set 00:22:33.874 [2024-12-06 15:39:39.521334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205aad0 is same with the state(6) to be set 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 [2024-12-06 15:39:39.521341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205aad0 is same with the state(6) to be set 00:22:33.874 [2024-12-06 15:39:39.521348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205aad0 is same with Write completed with error (sct=0, sc=8) 00:22:33.874 the state(6) to be set 00:22:33.874 starting I/O failed: -6 00:22:33.874 [2024-12-06 15:39:39.521359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205aad0 is same with the state(6) to be set 00:22:33.874 [2024-12-06 15:39:39.521376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205aad0 is same with the state(6) to be set 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 
00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 [2024-12-06 15:39:39.521640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03a30 is same with the state(6) to be set 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 [2024-12-06 15:39:39.521661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03a30 is same with the state(6) to be set 00:22:33.874 [2024-12-06 15:39:39.521670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03a30 is same with the state(6) to be set 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 [2024-12-06 15:39:39.521676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03a30 is same with the state(6) to be set 00:22:33.874 starting I/O failed: -6 00:22:33.874 [2024-12-06 15:39:39.521684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e03a30 is same with the state(6) to be set 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 Write completed with error (sct=0, sc=8) 00:22:33.874 starting I/O failed: -6 00:22:33.874 [2024-12-06 15:39:39.521754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:33.874 Write completed with error (sct=0, 
sc=8) 00:22:33.874 starting I/O failed: -6
00:22:33.874 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [message pair repeated many times; repeats elided]
00:22:33.875 [2024-12-06 15:39:39.523320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:33.875 NVMe io qpair process completion error
00:22:33.875 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeats elided]
00:22:33.875 [2024-12-06 15:39:39.527005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:33.875 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeats elided]
00:22:33.875 [2024-12-06 15:39:39.527890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:33.875 [2024-12-06 15:39:39.528167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205c660 is same with the state(6) to be set [message repeated, interleaved with I/O failure lines; repeats elided]
00:22:33.875 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeats elided]
00:22:33.876 [2024-12-06 15:39:39.528523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205cb30 is same with the state(6) to be set [message repeated, interleaved with I/O failure lines; repeats elided]
00:22:33.876 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeats elided]
00:22:33.876 [2024-12-06 15:39:39.528928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:33.876 [2024-12-06 15:39:39.528964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205d000 is same with the state(6) to be set [message repeated; repeats elided]
00:22:33.876 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeats elided]
00:22:33.876 [2024-12-06 15:39:39.529530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205c190 is same with the state(6) to be set [message repeated, interleaved with I/O failure lines; repeats elided]
00:22:33.876 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeats elided]
00:22:33.877 [2024-12-06 15:39:39.530515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.877 NVMe io qpair process completion error
00:22:33.877 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeats elided]
00:22:33.877 [2024-12-06 15:39:39.531347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:33.877 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeats elided]
00:22:33.877 [2024-12-06 15:39:39.532221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:33.877 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeats elided]
00:22:33.878 [2024-12-06 15:39:39.533227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.878 starting I/O failed: -6 / Write completed with error (sct=0, sc=8) [repeats elided]
00:22:33.878 Write completed with error
(sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 [2024-12-06 15:39:39.535714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:33.878 NVMe io qpair process completion error 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 
00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 [2024-12-06 15:39:39.536661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 Write completed with error (sct=0, sc=8) 00:22:33.878 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 
00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write 
completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 [2024-12-06 15:39:39.537569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 
starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 
Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 [2024-12-06 15:39:39.538590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 
00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.879 Write completed with error (sct=0, 
sc=8) 00:22:33.879 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error 
(sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 [2024-12-06 15:39:39.540760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:33.880 NVMe io qpair process completion error 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 
00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 [2024-12-06 15:39:39.542040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write 
completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 
00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with 
error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.880 Write completed with error (sct=0, sc=8) 00:22:33.880 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 [2024-12-06 15:39:39.543772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 
Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 00:22:33.881 Write completed with error (sct=0, sc=8) 00:22:33.881 starting I/O failed: -6 
00:22:33.881 Write completed with error (sct=0, sc=8)
00:22:33.881 starting I/O failed: -6
[... preceding pair of lines repeated for each outstanding write I/O; duplicate entries trimmed ...]
00:22:33.881 [2024-12-06 15:39:39.548363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:33.881 NVMe io qpair process completion error
[... duplicate write-error entries trimmed ...]
00:22:33.881 [2024-12-06 15:39:39.549404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... duplicate write-error entries trimmed ...]
00:22:33.882 [2024-12-06 15:39:39.550292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... duplicate write-error entries trimmed ...]
00:22:33.882 [2024-12-06 15:39:39.551288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... duplicate write-error entries trimmed ...]
00:22:33.883 [2024-12-06 15:39:39.553354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:33.883 NVMe io qpair process completion error
[... duplicate write-error entries trimmed ...]
00:22:33.883 [2024-12-06 15:39:39.554363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... duplicate write-error entries trimmed ...]
00:22:33.883 [2024-12-06 15:39:39.555219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... duplicate write-error entries trimmed ...]
00:22:33.884 [2024-12-06 15:39:39.556258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... duplicate write-error entries trimmed ...]
00:22:33.884 [2024-12-06 15:39:39.557872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:33.884 NVMe io qpair process completion error
[... duplicate write-error entries trimmed ...]
00:22:33.885 [2024-12-06 15:39:39.558820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... duplicate write-error entries continue ...]
00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 [2024-12-06 15:39:39.559704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:33.885 starting I/O failed: -6 00:22:33.885 starting I/O failed: -6 00:22:33.885 starting I/O failed: -6 00:22:33.885 starting I/O failed: -6 00:22:33.885 starting I/O failed: -6 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with 
error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 
Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 [2024-12-06 15:39:39.560918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 
00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: 
-6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.885 Write completed with error (sct=0, sc=8) 00:22:33.885 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O 
failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 [2024-12-06 15:39:39.567338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:33.886 NVMe io qpair process completion error 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 
00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 [2024-12-06 15:39:39.568512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O 
failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, 
sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 [2024-12-06 15:39:39.569399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, 
sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.886 Write completed with error (sct=0, sc=8) 00:22:33.886 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 Write completed 
with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 [2024-12-06 15:39:39.570400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write 
completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 
Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 00:22:33.887 Write completed with error (sct=0, sc=8) 00:22:33.887 starting I/O failed: -6 
00:22:33.887 Write completed with error (sct=0, sc=8)
00:22:33.887 starting I/O failed: -6
[the two lines above repeat for every outstanding I/O on each failing qpair; duplicate lines omitted]
00:22:33.887 [2024-12-06 15:39:39.575122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:33.887 NVMe io qpair process completion error
00:22:33.888 [2024-12-06 15:39:39.576020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:33.888 [2024-12-06 15:39:39.576828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:33.888 [2024-12-06 15:39:39.578020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:33.889 [2024-12-06 15:39:39.580640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:33.889 NVMe io qpair process completion error
00:22:33.889 Initializing NVMe Controllers
00:22:33.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:33.889 Controller IO queue size 128, less than required.
00:22:33.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:33.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:33.889 Controller IO queue size 128, less than required.
00:22:33.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:33.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:33.889 Controller IO queue size 128, less than required.
00:22:33.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:33.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:33.889 Controller IO queue size 128, less than required.
00:22:33.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:33.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:33.889 Controller IO queue size 128, less than required.
00:22:33.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:33.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:22:33.889 Controller IO queue size 128, less than required. 00:22:33.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:33.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:22:33.889 Controller IO queue size 128, less than required. 00:22:33.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:33.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:33.889 Controller IO queue size 128, less than required. 00:22:33.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:33.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:22:33.889 Controller IO queue size 128, less than required. 00:22:33.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:33.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:22:33.889 Controller IO queue size 128, less than required. 00:22:33.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:33.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:22:33.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:22:33.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:22:33.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:22:33.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:22:33.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:22:33.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:22:33.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:33.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:22:33.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:22:33.889 Initialization complete. Launching workers. 
00:22:33.889 ========================================================
00:22:33.889 Latency(us)
00:22:33.889 Device Information : IOPS MiB/s Average min max
00:22:33.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2211.02 95.00 57898.68 777.24 102357.27
00:22:33.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2160.09 92.82 59281.53 698.73 114417.44
00:22:33.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2204.87 94.74 58097.59 615.59 112799.56
00:22:33.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2209.27 94.93 58030.26 724.49 110657.91
00:22:33.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2213.22 95.10 57942.73 870.38 110218.95
00:22:33.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2201.36 94.59 58266.79 867.93 111839.67
00:22:33.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2210.80 95.00 58093.15 927.37 119034.38
00:22:33.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2204.66 94.73 57589.74 869.33 108022.49
00:22:33.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2227.27 95.70 57670.62 700.99 122146.28
00:22:33.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2163.60 92.97 58679.53 712.86 108139.32
00:22:33.889 ========================================================
00:22:33.889 Total : 22006.16 945.58 58151.09 615.59 122146.28
00:22:33.889 ========================================================
00:22:33.889 [2024-12-06 15:39:39.583623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe39ae0 is same with the state(6) to be set
00:22:33.889 [2024-12-06 15:39:39.583668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37890 is same with the state(6) to be set
00:22:33.889 [2024-12-06 15:39:39.583697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xe37bc0 is same with the state(6) to be set 00:22:33.889 [2024-12-06 15:39:39.583726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37ef0 is same with the state(6) to be set 00:22:33.889 [2024-12-06 15:39:39.583754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe38410 is same with the state(6) to be set 00:22:33.889 [2024-12-06 15:39:39.583782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe38a70 is same with the state(6) to be set 00:22:33.889 [2024-12-06 15:39:39.583809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37560 is same with the state(6) to be set 00:22:33.889 [2024-12-06 15:39:39.583835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe39720 is same with the state(6) to be set 00:22:33.889 [2024-12-06 15:39:39.583862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe38740 is same with the state(6) to be set 00:22:33.890 [2024-12-06 15:39:39.583889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe39900 is same with the state(6) to be set 00:22:33.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:34.148 15:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:22:35.085 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3071500 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3071500 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 
-- # local arg=wait 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3071500 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.086 rmmod nvme_tcp 00:22:35.086 rmmod nvme_fabrics 00:22:35.086 rmmod nvme_keyring 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3071248 ']' 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3071248 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3071248 ']' 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3071248 00:22:35.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3071248) - No such process 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3071248 is not found' 00:22:35.086 Process with pid 3071248 is not found 00:22:35.086 15:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.086 15:39:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.709 00:22:37.709 real 0m9.800s 00:22:37.709 user 0m24.918s 00:22:37.709 sys 0m5.206s 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.709 15:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:37.709 ************************************ 00:22:37.709 END TEST nvmf_shutdown_tc4 00:22:37.709 ************************************ 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:37.709 00:22:37.709 real 0m41.342s 00:22:37.709 user 1m42.303s 00:22:37.709 sys 0m14.243s 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:37.709 ************************************ 00:22:37.709 END TEST nvmf_shutdown 00:22:37.709 ************************************ 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:37.709 ************************************ 00:22:37.709 START TEST nvmf_nsid 00:22:37.709 ************************************ 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:37.709 * Looking for test storage... 
00:22:37.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.709 
15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:37.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.709 --rc genhtml_branch_coverage=1 00:22:37.709 --rc genhtml_function_coverage=1 00:22:37.709 --rc genhtml_legend=1 00:22:37.709 --rc geninfo_all_blocks=1 00:22:37.709 --rc 
geninfo_unexecuted_blocks=1 00:22:37.709 00:22:37.709 ' 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:37.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.709 --rc genhtml_branch_coverage=1 00:22:37.709 --rc genhtml_function_coverage=1 00:22:37.709 --rc genhtml_legend=1 00:22:37.709 --rc geninfo_all_blocks=1 00:22:37.709 --rc geninfo_unexecuted_blocks=1 00:22:37.709 00:22:37.709 ' 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:37.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.709 --rc genhtml_branch_coverage=1 00:22:37.709 --rc genhtml_function_coverage=1 00:22:37.709 --rc genhtml_legend=1 00:22:37.709 --rc geninfo_all_blocks=1 00:22:37.709 --rc geninfo_unexecuted_blocks=1 00:22:37.709 00:22:37.709 ' 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:37.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.709 --rc genhtml_branch_coverage=1 00:22:37.709 --rc genhtml_function_coverage=1 00:22:37.709 --rc genhtml_legend=1 00:22:37.709 --rc geninfo_all_blocks=1 00:22:37.709 --rc geninfo_unexecuted_blocks=1 00:22:37.709 00:22:37.709 ' 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.709 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.710 15:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.710 15:39:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.004 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.263 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.263 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:22:43.263 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.263 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.263 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.263 15:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:43.263 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:43.263 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.263 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:43.264 Found net devices under 0000:86:00.0: cvl_0_0 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:43.264 Found net devices under 0000:86:00.1: cvl_0_1 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.264 15:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.264 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:43.523 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:43.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:22:43.523 00:22:43.523 --- 10.0.0.2 ping statistics --- 00:22:43.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.523 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:22:43.523 00:22:43.523 --- 10.0.0.1 ping statistics --- 00:22:43.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.523 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:43.523 15:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3075970 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3075970 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3075970 ']' 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.523 15:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:43.523 [2024-12-06 15:39:49.375595] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:22:43.523 [2024-12-06 15:39:49.375638] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.523 [2024-12-06 15:39:49.455143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.523 [2024-12-06 15:39:49.497195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.523 [2024-12-06 15:39:49.497229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.523 [2024-12-06 15:39:49.497236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.523 [2024-12-06 15:39:49.497242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.523 [2024-12-06 15:39:49.497246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:43.523 [2024-12-06 15:39:49.497807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3076215 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.460 
15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=cfa695f4-0a44-4fb1-9625-5293a8c6c1ab 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=29b147b8-4608-45f8-85fc-62d7b94927fd 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=fe537e64-a3fe-487e-8f46-5a6a4d4e9751 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:44.460 null0 00:22:44.460 null1 00:22:44.460 [2024-12-06 15:39:50.293705] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:22:44.460 [2024-12-06 15:39:50.293755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076215 ] 00:22:44.460 null2 00:22:44.460 [2024-12-06 15:39:50.300266] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.460 [2024-12-06 15:39:50.324465] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3076215 /var/tmp/tgt2.sock 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3076215 ']' 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:44.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.460 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:44.460 [2024-12-06 15:39:50.366171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.460 [2024-12-06 15:39:50.407187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.718 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.718 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:44.718 15:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:44.977 [2024-12-06 15:39:50.937159] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.977 [2024-12-06 15:39:50.953274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:45.235 nvme0n1 nvme0n2 00:22:45.235 nvme1n1 00:22:45.235 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:45.235 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:45.235 15:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:46.172 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:46.172 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:46.172 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:22:46.172 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:46.172 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:46.172 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:46.172 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:46.172 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:46.172 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:46.172 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:46.172 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:46.172 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:46.172 15:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:47.110 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:47.110 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:47.110 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:47.110 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:47.110 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:47.110 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid cfa695f4-0a44-4fb1-9625-5293a8c6c1ab 00:22:47.110 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:47.110 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:47.110 15:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:47.110 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:47.110 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=cfa695f40a444fb196255293a8c6c1ab 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CFA695F40A444FB196255293A8C6C1AB 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ CFA695F40A444FB196255293A8C6C1AB == \C\F\A\6\9\5\F\4\0\A\4\4\4\F\B\1\9\6\2\5\5\2\9\3\A\8\C\6\C\1\A\B ]] 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 29b147b8-4608-45f8-85fc-62d7b94927fd 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:47.369 
15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=29b147b8460845f885fc62d7b94927fd 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 29B147B8460845F885FC62D7B94927FD 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 29B147B8460845F885FC62D7B94927FD == \2\9\B\1\4\7\B\8\4\6\0\8\4\5\F\8\8\5\F\C\6\2\D\7\B\9\4\9\2\7\F\D ]] 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid fe537e64-a3fe-487e-8f46-5a6a4d4e9751 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fe537e64a3fe487e8f465a6a4d4e9751 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FE537E64A3FE487E8F465A6A4D4E9751 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ FE537E64A3FE487E8F465A6A4D4E9751 == \F\E\5\3\7\E\6\4\A\3\F\E\4\8\7\E\8\F\4\6\5\A\6\A\4\D\4\E\9\7\5\1 ]] 00:22:47.369 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:47.629 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:47.629 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:47.629 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3076215 00:22:47.629 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3076215 ']' 00:22:47.629 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3076215 00:22:47.629 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:47.629 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.629 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3076215 00:22:47.629 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:47.629 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:47.629 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3076215' 00:22:47.629 killing process with pid 3076215 00:22:47.629 15:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3076215 00:22:47.629 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3076215 00:22:47.888 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:47.888 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.888 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:47.888 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.888 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:22:47.888 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.888 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.888 rmmod nvme_tcp 00:22:47.888 rmmod nvme_fabrics 00:22:47.888 rmmod nvme_keyring 00:22:47.888 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:48.148 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:48.148 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:48.148 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3075970 ']' 00:22:48.148 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3075970 00:22:48.148 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3075970 ']' 00:22:48.148 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3075970 00:22:48.148 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:48.148 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.148 15:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3075970 00:22:48.148 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:48.148 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:48.148 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3075970' 00:22:48.148 killing process with pid 3075970 00:22:48.148 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3075970 00:22:48.148 15:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3075970 00:22:48.148 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:48.148 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:48.148 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:48.148 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:48.148 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:48.148 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:48.148 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:48.148 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.148 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:48.148 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.148 15:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.148 15:39:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.681 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.681 00:22:50.681 real 0m12.995s 00:22:50.681 user 0m10.349s 00:22:50.681 sys 0m5.572s 00:22:50.681 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.681 15:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:50.681 ************************************ 00:22:50.681 END TEST nvmf_nsid 00:22:50.681 ************************************ 00:22:50.681 15:39:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:50.681 00:22:50.681 real 11m58.334s 00:22:50.681 user 25m32.298s 00:22:50.681 sys 3m45.671s 00:22:50.681 15:39:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.681 15:39:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:50.681 ************************************ 00:22:50.681 END TEST nvmf_target_extra 00:22:50.681 ************************************ 00:22:50.681 15:39:56 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:50.681 15:39:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:50.681 15:39:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.681 15:39:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:50.681 ************************************ 00:22:50.681 START TEST nvmf_host 00:22:50.681 ************************************ 00:22:50.681 15:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:50.681 * Looking for test storage... 
00:22:50.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:50.681 15:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:50.681 15:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:50.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.682 --rc genhtml_branch_coverage=1 00:22:50.682 --rc genhtml_function_coverage=1 00:22:50.682 --rc genhtml_legend=1 00:22:50.682 --rc geninfo_all_blocks=1 00:22:50.682 --rc geninfo_unexecuted_blocks=1 00:22:50.682 00:22:50.682 ' 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:50.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.682 --rc genhtml_branch_coverage=1 00:22:50.682 --rc genhtml_function_coverage=1 00:22:50.682 --rc genhtml_legend=1 00:22:50.682 --rc 
geninfo_all_blocks=1 00:22:50.682 --rc geninfo_unexecuted_blocks=1 00:22:50.682 00:22:50.682 ' 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:50.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.682 --rc genhtml_branch_coverage=1 00:22:50.682 --rc genhtml_function_coverage=1 00:22:50.682 --rc genhtml_legend=1 00:22:50.682 --rc geninfo_all_blocks=1 00:22:50.682 --rc geninfo_unexecuted_blocks=1 00:22:50.682 00:22:50.682 ' 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:50.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.682 --rc genhtml_branch_coverage=1 00:22:50.682 --rc genhtml_function_coverage=1 00:22:50.682 --rc genhtml_legend=1 00:22:50.682 --rc geninfo_all_blocks=1 00:22:50.682 --rc geninfo_unexecuted_blocks=1 00:22:50.682 00:22:50.682 ' 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:50.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.682 15:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.683 ************************************ 00:22:50.683 START TEST nvmf_multicontroller 00:22:50.683 ************************************ 00:22:50.683 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:50.683 * Looking for test storage... 
00:22:50.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:50.683 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:50.683 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:22:50.683 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:50.942 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:50.942 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.942 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.942 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.942 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.942 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.942 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.942 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.942 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.942 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.942 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.942 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:50.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.943 --rc genhtml_branch_coverage=1 00:22:50.943 --rc genhtml_function_coverage=1 
00:22:50.943 --rc genhtml_legend=1 00:22:50.943 --rc geninfo_all_blocks=1 00:22:50.943 --rc geninfo_unexecuted_blocks=1 00:22:50.943 00:22:50.943 ' 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:50.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.943 --rc genhtml_branch_coverage=1 00:22:50.943 --rc genhtml_function_coverage=1 00:22:50.943 --rc genhtml_legend=1 00:22:50.943 --rc geninfo_all_blocks=1 00:22:50.943 --rc geninfo_unexecuted_blocks=1 00:22:50.943 00:22:50.943 ' 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:50.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.943 --rc genhtml_branch_coverage=1 00:22:50.943 --rc genhtml_function_coverage=1 00:22:50.943 --rc genhtml_legend=1 00:22:50.943 --rc geninfo_all_blocks=1 00:22:50.943 --rc geninfo_unexecuted_blocks=1 00:22:50.943 00:22:50.943 ' 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:50.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.943 --rc genhtml_branch_coverage=1 00:22:50.943 --rc genhtml_function_coverage=1 00:22:50.943 --rc genhtml_legend=1 00:22:50.943 --rc geninfo_all_blocks=1 00:22:50.943 --rc geninfo_unexecuted_blocks=1 00:22:50.943 00:22:50.943 ' 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.943 15:39:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:50.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:50.943 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:50.944 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.944 15:39:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.515 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:57.516 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:57.516 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.516 15:40:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:57.516 Found net devices under 0000:86:00.0: cvl_0_0 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:57.516 Found net devices under 0000:86:00.1: cvl_0_1 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:57.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:22:57.516 00:22:57.516 --- 10.0.0.2 ping statistics --- 00:22:57.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.516 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:22:57.516 00:22:57.516 --- 10.0.0.1 ping statistics --- 00:22:57.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.516 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3080502 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3080502 00:22:57.516 15:40:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3080502 ']' 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.516 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.516 [2024-12-06 15:40:02.732200] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:22:57.516 [2024-12-06 15:40:02.732252] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.516 [2024-12-06 15:40:02.811903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:57.516 [2024-12-06 15:40:02.853789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.516 [2024-12-06 15:40:02.853826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:57.517 [2024-12-06 15:40:02.853833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.517 [2024-12-06 15:40:02.853839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.517 [2024-12-06 15:40:02.853844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.517 [2024-12-06 15:40:02.855260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.517 [2024-12-06 15:40:02.855364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.517 [2024-12-06 15:40:02.855365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.517 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.517 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:57.517 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:57.517 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:57.517 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.517 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:57.517 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.517 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 [2024-12-06 15:40:02.991747] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.517 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.517 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:57.517 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.517 15:40:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 Malloc0 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 [2024-12-06 
15:40:03.054828] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 [2024-12-06 15:40:03.062766] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 Malloc1 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3080543 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3080543 /var/tmp/bdevperf.sock 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3080543 ']' 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.517 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.777 NVMe0n1 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.777 1 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:57.777 15:40:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.777 request: 00:22:57.777 { 00:22:57.777 "name": "NVMe0", 00:22:57.777 "trtype": "tcp", 00:22:57.777 "traddr": "10.0.0.2", 00:22:57.777 "adrfam": "ipv4", 00:22:57.777 "trsvcid": "4420", 00:22:57.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.777 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:57.777 "hostaddr": "10.0.0.1", 00:22:57.777 "prchk_reftag": false, 00:22:57.777 "prchk_guard": false, 00:22:57.777 "hdgst": false, 00:22:57.777 "ddgst": false, 00:22:57.777 "allow_unrecognized_csi": false, 00:22:57.777 "method": "bdev_nvme_attach_controller", 00:22:57.777 "req_id": 1 00:22:57.777 } 00:22:57.777 Got JSON-RPC error response 00:22:57.777 response: 00:22:57.777 { 00:22:57.777 "code": -114, 00:22:57.777 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:57.777 } 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:57.777 15:40:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.777 request: 00:22:57.777 { 00:22:57.777 "name": "NVMe0", 00:22:57.777 "trtype": "tcp", 00:22:57.777 "traddr": "10.0.0.2", 00:22:57.777 "adrfam": "ipv4", 00:22:57.777 "trsvcid": "4420", 00:22:57.777 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:57.777 "hostaddr": "10.0.0.1", 00:22:57.777 "prchk_reftag": false, 00:22:57.777 "prchk_guard": false, 00:22:57.777 "hdgst": false, 00:22:57.777 "ddgst": false, 00:22:57.777 "allow_unrecognized_csi": false, 00:22:57.777 "method": "bdev_nvme_attach_controller", 00:22:57.777 "req_id": 1 00:22:57.777 } 00:22:57.777 Got JSON-RPC error response 00:22:57.777 response: 00:22:57.777 { 00:22:57.777 "code": -114, 00:22:57.777 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:57.777 } 00:22:57.777 15:40:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.777 request: 00:22:57.777 { 00:22:57.777 "name": "NVMe0", 00:22:57.777 "trtype": "tcp", 00:22:57.777 "traddr": "10.0.0.2", 00:22:57.777 "adrfam": "ipv4", 00:22:57.777 "trsvcid": "4420", 00:22:57.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.777 "hostaddr": "10.0.0.1", 00:22:57.777 "prchk_reftag": false, 00:22:57.777 "prchk_guard": false, 00:22:57.777 "hdgst": false, 00:22:57.777 "ddgst": false, 00:22:57.777 "multipath": "disable", 00:22:57.777 "allow_unrecognized_csi": false, 00:22:57.777 "method": "bdev_nvme_attach_controller", 00:22:57.777 "req_id": 1 00:22:57.777 } 00:22:57.777 Got JSON-RPC error response 00:22:57.777 response: 00:22:57.777 { 00:22:57.777 "code": -114, 00:22:57.777 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:57.777 } 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.777 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.777 request: 00:22:57.777 { 00:22:57.777 "name": "NVMe0", 00:22:57.777 "trtype": "tcp", 00:22:57.777 "traddr": "10.0.0.2", 00:22:57.777 "adrfam": "ipv4", 00:22:57.777 "trsvcid": "4420", 00:22:57.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.777 "hostaddr": "10.0.0.1", 00:22:57.777 "prchk_reftag": false, 00:22:57.777 "prchk_guard": false, 00:22:57.777 "hdgst": false, 00:22:57.777 "ddgst": false, 00:22:57.777 "multipath": "failover", 00:22:57.777 "allow_unrecognized_csi": false, 00:22:57.777 "method": "bdev_nvme_attach_controller", 00:22:57.778 "req_id": 1 00:22:57.778 } 00:22:57.778 Got JSON-RPC error response 00:22:57.778 response: 00:22:57.778 { 00:22:57.778 "code": -114, 00:22:57.778 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:57.778 } 00:22:57.778 15:40:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.778 NVMe0n1 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.778 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.036 00:22:58.036 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.036 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:58.036 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:58.036 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.036 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.036 15:40:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.036 15:40:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:58.036 15:40:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:59.428 { 00:22:59.428 "results": [ 00:22:59.428 { 00:22:59.428 "job": "NVMe0n1", 00:22:59.428 "core_mask": "0x1", 00:22:59.428 "workload": "write", 00:22:59.428 "status": "finished", 00:22:59.428 "queue_depth": 128, 00:22:59.428 "io_size": 4096, 00:22:59.428 "runtime": 1.003712, 00:22:59.428 "iops": 25202.4485111267, 00:22:59.428 "mibps": 98.44706449658867, 00:22:59.428 "io_failed": 0, 00:22:59.428 "io_timeout": 0, 00:22:59.428 "avg_latency_us": 5072.468400349387, 00:22:59.428 "min_latency_us": 1724.2209523809524, 00:22:59.428 "max_latency_us": 8738.133333333333 00:22:59.428 } 00:22:59.428 ], 00:22:59.428 "core_count": 1 00:22:59.428 } 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3080543 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3080543 ']' 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3080543 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3080543 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3080543' 00:22:59.428 killing process with pid 3080543 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3080543 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3080543 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:59.428 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:59.428 [2024-12-06 15:40:03.165422] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:22:59.428 [2024-12-06 15:40:03.165471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080543 ] 00:22:59.428 [2024-12-06 15:40:03.237866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.428 [2024-12-06 15:40:03.280143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.428 [2024-12-06 15:40:03.977954] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name a5c5aa03-61da-422a-aca7-1baea79da966 already exists 00:22:59.428 [2024-12-06 15:40:03.977981] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:a5c5aa03-61da-422a-aca7-1baea79da966 alias for bdev NVMe1n1 00:22:59.428 [2024-12-06 15:40:03.977989] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:59.428 Running I/O for 1 seconds... 00:22:59.428 25168.00 IOPS, 98.31 MiB/s 00:22:59.428 Latency(us) 00:22:59.428 [2024-12-06T14:40:05.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.428 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:59.428 NVMe0n1 : 1.00 25202.45 98.45 0.00 0.00 5072.47 1724.22 8738.13 00:22:59.428 [2024-12-06T14:40:05.426Z] =================================================================================================================== 00:22:59.428 [2024-12-06T14:40:05.426Z] Total : 25202.45 98.45 0.00 0.00 5072.47 1724.22 8738.13 00:22:59.428 Received shutdown signal, test time was about 1.000000 seconds 00:22:59.428 00:22:59.428 Latency(us) 00:22:59.428 [2024-12-06T14:40:05.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.428 [2024-12-06T14:40:05.426Z] =================================================================================================================== 00:22:59.428 [2024-12-06T14:40:05.426Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:22:59.428 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:59.428 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:59.428 rmmod nvme_tcp 00:22:59.428 rmmod nvme_fabrics 00:22:59.686 rmmod nvme_keyring 00:22:59.686 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:59.686 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:59.686 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:59.687 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3080502 ']' 00:22:59.687 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3080502 00:22:59.687 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3080502 ']' 00:22:59.687 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3080502 
00:22:59.687 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:59.687 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.687 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3080502 00:22:59.687 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:59.687 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:59.687 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3080502' 00:22:59.687 killing process with pid 3080502 00:22:59.687 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3080502 00:22:59.687 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3080502 00:22:59.946 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:59.946 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:59.946 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:59.946 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:59.946 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:59.946 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:59.946 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:59.946 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:59.946 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:59.946 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.946 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.946 15:40:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.854 15:40:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:01.854 00:23:01.854 real 0m11.265s 00:23:01.854 user 0m12.637s 00:23:01.854 sys 0m5.232s 00:23:01.854 15:40:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:01.854 15:40:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.854 ************************************ 00:23:01.854 END TEST nvmf_multicontroller 00:23:01.854 ************************************ 00:23:01.854 15:40:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:01.854 15:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:01.854 15:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:01.854 15:40:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.112 ************************************ 00:23:02.112 START TEST nvmf_aer 00:23:02.112 ************************************ 00:23:02.112 15:40:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:02.112 * Looking for test storage... 
00:23:02.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:02.112 15:40:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:02.112 15:40:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:23:02.112 15:40:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:02.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.112 --rc genhtml_branch_coverage=1 00:23:02.112 --rc genhtml_function_coverage=1 00:23:02.112 --rc genhtml_legend=1 00:23:02.112 --rc geninfo_all_blocks=1 00:23:02.112 --rc geninfo_unexecuted_blocks=1 00:23:02.112 00:23:02.112 ' 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:02.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.112 --rc 
genhtml_branch_coverage=1 00:23:02.112 --rc genhtml_function_coverage=1 00:23:02.112 --rc genhtml_legend=1 00:23:02.112 --rc geninfo_all_blocks=1 00:23:02.112 --rc geninfo_unexecuted_blocks=1 00:23:02.112 00:23:02.112 ' 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:02.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.112 --rc genhtml_branch_coverage=1 00:23:02.112 --rc genhtml_function_coverage=1 00:23:02.112 --rc genhtml_legend=1 00:23:02.112 --rc geninfo_all_blocks=1 00:23:02.112 --rc geninfo_unexecuted_blocks=1 00:23:02.112 00:23:02.112 ' 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:02.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.112 --rc genhtml_branch_coverage=1 00:23:02.112 --rc genhtml_function_coverage=1 00:23:02.112 --rc genhtml_legend=1 00:23:02.112 --rc geninfo_all_blocks=1 00:23:02.112 --rc geninfo_unexecuted_blocks=1 00:23:02.112 00:23:02.112 ' 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.112 15:40:08 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:02.112 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:02.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:02.113 15:40:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:08.678 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:08.678 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.678 15:40:13 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:08.678 Found net devices under 0000:86:00.0: cvl_0_0 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:08.678 Found net devices under 0000:86:00.1: cvl_0_1 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.678 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:08.679 15:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:08.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:08.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:23:08.679 00:23:08.679 --- 10.0.0.2 ping statistics --- 00:23:08.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.679 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:08.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:08.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:23:08.679 00:23:08.679 --- 10.0.0.1 ping statistics --- 00:23:08.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:08.679 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- 
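The network bring-up traced above (nvmf/common.sh `nvmf_tcp_init`) moves one port of the NIC into a private network namespace so target and initiator can talk over real TCP on one host. A minimal sketch of that command sequence, printed rather than executed so it needs no root; the interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.x addresses are taken from this run and will differ on other hosts:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology seen in the log above.
# Emits the commands instead of running them (they require root).
setup_cmds() {
  cat <<'EOF'
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
EOF
}
setup_cmds
```

The bidirectional pings in the log (host → 10.0.0.2, namespace → 10.0.0.1) are the sanity check that this topology is up before the target starts.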
common/autotest_common.sh@10 -- # set +x 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3084459 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3084459 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3084459 ']' 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.679 [2024-12-06 15:40:14.107794] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:23:08.679 [2024-12-06 15:40:14.107843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.679 [2024-12-06 15:40:14.185551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:08.679 [2024-12-06 15:40:14.228193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:08.679 [2024-12-06 15:40:14.228229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.679 [2024-12-06 15:40:14.228237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.679 [2024-12-06 15:40:14.228243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.679 [2024-12-06 15:40:14.228249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.679 [2024-12-06 15:40:14.229727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.679 [2024-12-06 15:40:14.229757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.679 [2024-12-06 15:40:14.229862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.679 [2024-12-06 15:40:14.229864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.679 [2024-12-06 15:40:14.368341] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.679 Malloc0 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.679 [2024-12-06 15:40:14.428163] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.679 [ 00:23:08.679 { 00:23:08.679 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:08.679 "subtype": "Discovery", 00:23:08.679 "listen_addresses": [], 00:23:08.679 "allow_any_host": true, 00:23:08.679 "hosts": [] 00:23:08.679 }, 00:23:08.679 { 00:23:08.679 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.679 "subtype": "NVMe", 00:23:08.679 "listen_addresses": [ 00:23:08.679 { 00:23:08.679 "trtype": "TCP", 00:23:08.679 "adrfam": "IPv4", 00:23:08.679 "traddr": "10.0.0.2", 00:23:08.679 "trsvcid": "4420" 00:23:08.679 } 00:23:08.679 ], 00:23:08.679 "allow_any_host": true, 00:23:08.679 "hosts": [], 00:23:08.679 "serial_number": "SPDK00000000000001", 00:23:08.679 "model_number": "SPDK bdev Controller", 00:23:08.679 "max_namespaces": 2, 00:23:08.679 "min_cntlid": 1, 00:23:08.679 "max_cntlid": 65519, 00:23:08.679 "namespaces": [ 00:23:08.679 { 00:23:08.679 "nsid": 1, 00:23:08.679 "bdev_name": "Malloc0", 00:23:08.679 "name": "Malloc0", 00:23:08.679 "nguid": "591FD1658DEA44C98052DE55CF615E9F", 00:23:08.679 "uuid": "591fd165-8dea-44c9-8052-de55cf615e9f" 00:23:08.679 } 00:23:08.679 ] 00:23:08.679 } 00:23:08.679 ] 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:08.679 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3084562 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
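The `rpc_cmd` calls from host/aer.sh traced above build the target state that `nvmf_get_subsystems` then reports. A sketch of the same sequence as plain rpc.py invocations, again printed rather than executed (assumes a running nvmf_tgt; `rpc.py` stands in for the full SPDK scripts path used by the harness):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the host/aer.sh RPC sequence seen in the log above:
# transport, backing bdev, subsystem, namespace, then the TCP listener.
rpc_cmds() {
  cat <<'EOF'
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 --name Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
EOF
}
rpc_cmds
```

Note `-m 2` caps the subsystem at two namespaces, which is exactly what the AER test exercises: the second namespace (Malloc1) is added later to trigger the namespace-attribute-changed event.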
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:23:08.680 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.939 Malloc1 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.939 Asynchronous Event Request test 00:23:08.939 Attaching to 10.0.0.2 00:23:08.939 Attached to 10.0.0.2 00:23:08.939 Registering asynchronous event callbacks... 00:23:08.939 Starting namespace attribute notice tests for all controllers... 00:23:08.939 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:08.939 aer_cb - Changed Namespace 00:23:08.939 Cleaning up... 
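The `waitforfile` loop traced above synchronizes the harness with the aer binary: the binary touches `/tmp/aer_touch_file` once its AER callbacks are registered, and the harness polls for it before adding Malloc1. A minimal re-implementation sketch of that helper (assumption: this mirrors the `i -lt 200` / `sleep 0.1` pattern visible in the autotest_common.sh trace, i.e. roughly a 20-second timeout):

```shell
#!/usr/bin/env bash
# Sketch of the waitforfile polling helper seen in the log above:
# poll for the file up to 200 times at 0.1s intervals, then give up.
waitforfile() {
  local file=$1 i=0
  while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do
    i=$((i + 1))
    sleep 0.1
  done
  [ -e "$file" ]
}

# Demo: create the file from the background after a short delay.
tmp=$(mktemp -u)
( sleep 0.2; : > "$tmp" ) &
if waitforfile "$tmp"; then echo found; else echo timeout; fi
rm -f "$tmp"
```

Only after this barrier does the harness issue `nvmf_subsystem_add_ns ... -n 2`, so the "Changed Namespace" AER seen in the output is guaranteed to arrive while the test is listening.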
00:23:08.939 [ 00:23:08.939 { 00:23:08.939 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:08.939 "subtype": "Discovery", 00:23:08.939 "listen_addresses": [], 00:23:08.939 "allow_any_host": true, 00:23:08.939 "hosts": [] 00:23:08.939 }, 00:23:08.939 { 00:23:08.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.939 "subtype": "NVMe", 00:23:08.939 "listen_addresses": [ 00:23:08.939 { 00:23:08.939 "trtype": "TCP", 00:23:08.939 "adrfam": "IPv4", 00:23:08.939 "traddr": "10.0.0.2", 00:23:08.939 "trsvcid": "4420" 00:23:08.939 } 00:23:08.939 ], 00:23:08.939 "allow_any_host": true, 00:23:08.939 "hosts": [], 00:23:08.939 "serial_number": "SPDK00000000000001", 00:23:08.939 "model_number": "SPDK bdev Controller", 00:23:08.939 "max_namespaces": 2, 00:23:08.939 "min_cntlid": 1, 00:23:08.939 "max_cntlid": 65519, 00:23:08.939 "namespaces": [ 00:23:08.939 { 00:23:08.939 "nsid": 1, 00:23:08.939 "bdev_name": "Malloc0", 00:23:08.939 "name": "Malloc0", 00:23:08.939 "nguid": "591FD1658DEA44C98052DE55CF615E9F", 00:23:08.939 "uuid": "591fd165-8dea-44c9-8052-de55cf615e9f" 00:23:08.939 }, 00:23:08.939 { 00:23:08.939 "nsid": 2, 00:23:08.939 "bdev_name": "Malloc1", 00:23:08.939 "name": "Malloc1", 00:23:08.939 "nguid": "FA22911FEAB3489FBA39004789FFE5CB", 00:23:08.939 "uuid": "fa22911f-eab3-489f-ba39-004789ffe5cb" 00:23:08.939 } 00:23:08.939 ] 00:23:08.939 } 00:23:08.939 ] 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3084562 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.939 15:40:14 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:08.939 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:08.940 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:08.940 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:08.940 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:08.940 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:08.940 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:08.940 rmmod nvme_tcp 00:23:08.940 rmmod nvme_fabrics 00:23:08.940 rmmod nvme_keyring 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
3084459 ']' 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3084459 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3084459 ']' 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3084459 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3084459 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3084459' 00:23:09.200 killing process with pid 3084459 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3084459 00:23:09.200 15:40:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3084459 00:23:09.200 15:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:09.200 15:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:09.200 15:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:09.200 15:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:09.200 15:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:09.200 15:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:09.200 15:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:09.200 15:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:09.200 15:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:09.200 15:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.200 15:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.200 15:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:11.734 00:23:11.734 real 0m9.364s 00:23:11.734 user 0m5.471s 00:23:11.734 sys 0m4.887s 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.734 ************************************ 00:23:11.734 END TEST nvmf_aer 00:23:11.734 ************************************ 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.734 ************************************ 00:23:11.734 START TEST nvmf_async_init 00:23:11.734 ************************************ 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:11.734 * Looking for test storage... 
00:23:11.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.734 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.735 15:40:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:11.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.735 --rc genhtml_branch_coverage=1 00:23:11.735 --rc genhtml_function_coverage=1 00:23:11.735 --rc genhtml_legend=1 00:23:11.735 --rc geninfo_all_blocks=1 00:23:11.735 --rc geninfo_unexecuted_blocks=1 00:23:11.735 
00:23:11.735 ' 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:11.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.735 --rc genhtml_branch_coverage=1 00:23:11.735 --rc genhtml_function_coverage=1 00:23:11.735 --rc genhtml_legend=1 00:23:11.735 --rc geninfo_all_blocks=1 00:23:11.735 --rc geninfo_unexecuted_blocks=1 00:23:11.735 00:23:11.735 ' 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:11.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.735 --rc genhtml_branch_coverage=1 00:23:11.735 --rc genhtml_function_coverage=1 00:23:11.735 --rc genhtml_legend=1 00:23:11.735 --rc geninfo_all_blocks=1 00:23:11.735 --rc geninfo_unexecuted_blocks=1 00:23:11.735 00:23:11.735 ' 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:11.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.735 --rc genhtml_branch_coverage=1 00:23:11.735 --rc genhtml_function_coverage=1 00:23:11.735 --rc genhtml_legend=1 00:23:11.735 --rc geninfo_all_blocks=1 00:23:11.735 --rc geninfo_unexecuted_blocks=1 00:23:11.735 00:23:11.735 ' 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=009440385bd94b8eb462fbfb73f53dcb 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:11.735 15:40:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.303 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.303 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:18.304 15:40:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:18.304 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:18.304 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:18.304 Found net devices under 0000:86:00.0: cvl_0_0 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:18.304 Found net devices under 0000:86:00.1: cvl_0_1 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:18.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:18.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:23:18.304 00:23:18.304 --- 10.0.0.2 ping statistics --- 00:23:18.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.304 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:23:18.304 00:23:18.304 --- 10.0.0.1 ping statistics --- 00:23:18.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.304 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.304 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3088093 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3088093 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3088093 ']' 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.305 [2024-12-06 15:40:23.471954] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:23:18.305 [2024-12-06 15:40:23.472007] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.305 [2024-12-06 15:40:23.551310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.305 [2024-12-06 15:40:23.590686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.305 [2024-12-06 15:40:23.590720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.305 [2024-12-06 15:40:23.590730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.305 [2024-12-06 15:40:23.590738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.305 [2024-12-06 15:40:23.590745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:18.305 [2024-12-06 15:40:23.591357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.305 [2024-12-06 15:40:23.735566] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.305 null0 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 009440385bd94b8eb462fbfb73f53dcb 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.305 [2024-12-06 15:40:23.779832] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.305 15:40:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.305 nvme0n1 00:23:18.305 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.305 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:18.305 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.305 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.305 [ 00:23:18.305 { 00:23:18.305 "name": "nvme0n1", 00:23:18.305 "aliases": [ 00:23:18.305 "00944038-5bd9-4b8e-b462-fbfb73f53dcb" 00:23:18.305 ], 00:23:18.305 "product_name": "NVMe disk", 00:23:18.305 "block_size": 512, 00:23:18.305 "num_blocks": 2097152, 00:23:18.305 "uuid": "00944038-5bd9-4b8e-b462-fbfb73f53dcb", 00:23:18.305 "numa_id": 1, 00:23:18.305 "assigned_rate_limits": { 00:23:18.305 "rw_ios_per_sec": 0, 00:23:18.305 "rw_mbytes_per_sec": 0, 00:23:18.305 "r_mbytes_per_sec": 0, 00:23:18.305 "w_mbytes_per_sec": 0 00:23:18.305 }, 00:23:18.305 "claimed": false, 00:23:18.305 "zoned": false, 00:23:18.305 "supported_io_types": { 00:23:18.305 "read": true, 00:23:18.305 "write": true, 00:23:18.305 "unmap": false, 00:23:18.305 "flush": true, 00:23:18.305 "reset": true, 00:23:18.305 "nvme_admin": true, 00:23:18.305 "nvme_io": true, 00:23:18.305 "nvme_io_md": false, 00:23:18.305 "write_zeroes": true, 00:23:18.305 "zcopy": false, 00:23:18.305 "get_zone_info": false, 00:23:18.305 "zone_management": false, 00:23:18.305 "zone_append": false, 00:23:18.305 "compare": true, 00:23:18.305 "compare_and_write": true, 00:23:18.305 "abort": true, 00:23:18.305 "seek_hole": false, 00:23:18.305 "seek_data": false, 00:23:18.305 "copy": true, 00:23:18.305 
"nvme_iov_md": false 00:23:18.305 }, 00:23:18.305 "memory_domains": [ 00:23:18.305 { 00:23:18.305 "dma_device_id": "system", 00:23:18.305 "dma_device_type": 1 00:23:18.305 } 00:23:18.305 ], 00:23:18.305 "driver_specific": { 00:23:18.305 "nvme": [ 00:23:18.305 { 00:23:18.305 "trid": { 00:23:18.305 "trtype": "TCP", 00:23:18.305 "adrfam": "IPv4", 00:23:18.305 "traddr": "10.0.0.2", 00:23:18.305 "trsvcid": "4420", 00:23:18.305 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:18.305 }, 00:23:18.305 "ctrlr_data": { 00:23:18.305 "cntlid": 1, 00:23:18.305 "vendor_id": "0x8086", 00:23:18.305 "model_number": "SPDK bdev Controller", 00:23:18.305 "serial_number": "00000000000000000000", 00:23:18.305 "firmware_revision": "25.01", 00:23:18.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.305 "oacs": { 00:23:18.305 "security": 0, 00:23:18.305 "format": 0, 00:23:18.305 "firmware": 0, 00:23:18.305 "ns_manage": 0 00:23:18.305 }, 00:23:18.305 "multi_ctrlr": true, 00:23:18.305 "ana_reporting": false 00:23:18.305 }, 00:23:18.305 "vs": { 00:23:18.305 "nvme_version": "1.3" 00:23:18.305 }, 00:23:18.305 "ns_data": { 00:23:18.305 "id": 1, 00:23:18.305 "can_share": true 00:23:18.305 } 00:23:18.305 } 00:23:18.305 ], 00:23:18.305 "mp_policy": "active_passive" 00:23:18.305 } 00:23:18.305 } 00:23:18.305 ] 00:23:18.305 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.305 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:18.305 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.305 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.305 [2024-12-06 15:40:24.044423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:18.305 [2024-12-06 15:40:24.044484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x6edc00 (9): Bad file descriptor 00:23:18.305 [2024-12-06 15:40:24.176447] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:18.305 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.305 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:18.305 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.305 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.305 [ 00:23:18.305 { 00:23:18.305 "name": "nvme0n1", 00:23:18.305 "aliases": [ 00:23:18.306 "00944038-5bd9-4b8e-b462-fbfb73f53dcb" 00:23:18.306 ], 00:23:18.306 "product_name": "NVMe disk", 00:23:18.306 "block_size": 512, 00:23:18.306 "num_blocks": 2097152, 00:23:18.306 "uuid": "00944038-5bd9-4b8e-b462-fbfb73f53dcb", 00:23:18.306 "numa_id": 1, 00:23:18.306 "assigned_rate_limits": { 00:23:18.306 "rw_ios_per_sec": 0, 00:23:18.306 "rw_mbytes_per_sec": 0, 00:23:18.306 "r_mbytes_per_sec": 0, 00:23:18.306 "w_mbytes_per_sec": 0 00:23:18.306 }, 00:23:18.306 "claimed": false, 00:23:18.306 "zoned": false, 00:23:18.306 "supported_io_types": { 00:23:18.306 "read": true, 00:23:18.306 "write": true, 00:23:18.306 "unmap": false, 00:23:18.306 "flush": true, 00:23:18.306 "reset": true, 00:23:18.306 "nvme_admin": true, 00:23:18.306 "nvme_io": true, 00:23:18.306 "nvme_io_md": false, 00:23:18.306 "write_zeroes": true, 00:23:18.306 "zcopy": false, 00:23:18.306 "get_zone_info": false, 00:23:18.306 "zone_management": false, 00:23:18.306 "zone_append": false, 00:23:18.306 "compare": true, 00:23:18.306 "compare_and_write": true, 00:23:18.306 "abort": true, 00:23:18.306 "seek_hole": false, 00:23:18.306 "seek_data": false, 00:23:18.306 "copy": true, 00:23:18.306 "nvme_iov_md": false 00:23:18.306 }, 00:23:18.306 "memory_domains": [ 
00:23:18.306 { 00:23:18.306 "dma_device_id": "system", 00:23:18.306 "dma_device_type": 1 00:23:18.306 } 00:23:18.306 ], 00:23:18.306 "driver_specific": { 00:23:18.306 "nvme": [ 00:23:18.306 { 00:23:18.306 "trid": { 00:23:18.306 "trtype": "TCP", 00:23:18.306 "adrfam": "IPv4", 00:23:18.306 "traddr": "10.0.0.2", 00:23:18.306 "trsvcid": "4420", 00:23:18.306 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:18.306 }, 00:23:18.306 "ctrlr_data": { 00:23:18.306 "cntlid": 2, 00:23:18.306 "vendor_id": "0x8086", 00:23:18.306 "model_number": "SPDK bdev Controller", 00:23:18.306 "serial_number": "00000000000000000000", 00:23:18.306 "firmware_revision": "25.01", 00:23:18.306 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.306 "oacs": { 00:23:18.306 "security": 0, 00:23:18.306 "format": 0, 00:23:18.306 "firmware": 0, 00:23:18.306 "ns_manage": 0 00:23:18.306 }, 00:23:18.306 "multi_ctrlr": true, 00:23:18.306 "ana_reporting": false 00:23:18.306 }, 00:23:18.306 "vs": { 00:23:18.306 "nvme_version": "1.3" 00:23:18.306 }, 00:23:18.306 "ns_data": { 00:23:18.306 "id": 1, 00:23:18.306 "can_share": true 00:23:18.306 } 00:23:18.306 } 00:23:18.306 ], 00:23:18.306 "mp_policy": "active_passive" 00:23:18.306 } 00:23:18.306 } 00:23:18.306 ] 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.9FrFkB7Bpk 
00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.9FrFkB7Bpk 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.9FrFkB7Bpk 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.306 [2024-12-06 15:40:24.249040] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:18.306 [2024-12-06 15:40:24.249172] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.306 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.306 [2024-12-06 15:40:24.265098] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.565 nvme0n1 00:23:18.565 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.565 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:18.565 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.565 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.565 [ 00:23:18.565 { 00:23:18.565 "name": "nvme0n1", 00:23:18.565 "aliases": [ 00:23:18.565 "00944038-5bd9-4b8e-b462-fbfb73f53dcb" 00:23:18.565 ], 00:23:18.565 "product_name": "NVMe disk", 00:23:18.565 "block_size": 512, 00:23:18.565 "num_blocks": 2097152, 00:23:18.565 "uuid": "00944038-5bd9-4b8e-b462-fbfb73f53dcb", 00:23:18.565 "numa_id": 1, 00:23:18.565 "assigned_rate_limits": { 00:23:18.565 "rw_ios_per_sec": 0, 00:23:18.565 
"rw_mbytes_per_sec": 0, 00:23:18.565 "r_mbytes_per_sec": 0, 00:23:18.565 "w_mbytes_per_sec": 0 00:23:18.565 }, 00:23:18.565 "claimed": false, 00:23:18.565 "zoned": false, 00:23:18.565 "supported_io_types": { 00:23:18.565 "read": true, 00:23:18.565 "write": true, 00:23:18.565 "unmap": false, 00:23:18.565 "flush": true, 00:23:18.565 "reset": true, 00:23:18.565 "nvme_admin": true, 00:23:18.565 "nvme_io": true, 00:23:18.565 "nvme_io_md": false, 00:23:18.565 "write_zeroes": true, 00:23:18.565 "zcopy": false, 00:23:18.565 "get_zone_info": false, 00:23:18.565 "zone_management": false, 00:23:18.565 "zone_append": false, 00:23:18.565 "compare": true, 00:23:18.565 "compare_and_write": true, 00:23:18.565 "abort": true, 00:23:18.565 "seek_hole": false, 00:23:18.565 "seek_data": false, 00:23:18.565 "copy": true, 00:23:18.565 "nvme_iov_md": false 00:23:18.565 }, 00:23:18.565 "memory_domains": [ 00:23:18.565 { 00:23:18.565 "dma_device_id": "system", 00:23:18.565 "dma_device_type": 1 00:23:18.565 } 00:23:18.565 ], 00:23:18.565 "driver_specific": { 00:23:18.566 "nvme": [ 00:23:18.566 { 00:23:18.566 "trid": { 00:23:18.566 "trtype": "TCP", 00:23:18.566 "adrfam": "IPv4", 00:23:18.566 "traddr": "10.0.0.2", 00:23:18.566 "trsvcid": "4421", 00:23:18.566 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:18.566 }, 00:23:18.566 "ctrlr_data": { 00:23:18.566 "cntlid": 3, 00:23:18.566 "vendor_id": "0x8086", 00:23:18.566 "model_number": "SPDK bdev Controller", 00:23:18.566 "serial_number": "00000000000000000000", 00:23:18.566 "firmware_revision": "25.01", 00:23:18.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.566 "oacs": { 00:23:18.566 "security": 0, 00:23:18.566 "format": 0, 00:23:18.566 "firmware": 0, 00:23:18.566 "ns_manage": 0 00:23:18.566 }, 00:23:18.566 "multi_ctrlr": true, 00:23:18.566 "ana_reporting": false 00:23:18.566 }, 00:23:18.566 "vs": { 00:23:18.566 "nvme_version": "1.3" 00:23:18.566 }, 00:23:18.566 "ns_data": { 00:23:18.566 "id": 1, 00:23:18.566 "can_share": true 00:23:18.566 } 
00:23:18.566 } 00:23:18.566 ], 00:23:18.566 "mp_policy": "active_passive" 00:23:18.566 } 00:23:18.566 } 00:23:18.566 ] 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.9FrFkB7Bpk 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:18.566 rmmod nvme_tcp 00:23:18.566 rmmod nvme_fabrics 00:23:18.566 rmmod nvme_keyring 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:18.566 15:40:24 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3088093 ']' 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3088093 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3088093 ']' 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3088093 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3088093 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3088093' 00:23:18.566 killing process with pid 3088093 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3088093 00:23:18.566 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3088093 00:23:18.824 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:18.824 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:18.824 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:18.825 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:18.825 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:18.825 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:18.825 
15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:18.825 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:18.825 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:18.825 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.825 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.825 15:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.765 15:40:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:20.765 00:23:20.765 real 0m9.405s 00:23:20.765 user 0m2.971s 00:23:20.765 sys 0m4.852s 00:23:20.765 15:40:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:20.765 15:40:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.765 ************************************ 00:23:20.765 END TEST nvmf_async_init 00:23:20.765 ************************************ 00:23:20.765 15:40:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:20.765 15:40:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:20.765 15:40:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:20.765 15:40:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.024 ************************************ 00:23:21.024 START TEST dma 00:23:21.024 ************************************ 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:23:21.024 * Looking for test storage... 00:23:21.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.024 15:40:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:21.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.024 --rc genhtml_branch_coverage=1 00:23:21.024 --rc genhtml_function_coverage=1 00:23:21.024 --rc genhtml_legend=1 00:23:21.024 --rc geninfo_all_blocks=1 00:23:21.024 --rc geninfo_unexecuted_blocks=1 00:23:21.025 00:23:21.025 ' 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:21.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.025 --rc genhtml_branch_coverage=1 00:23:21.025 --rc genhtml_function_coverage=1 
00:23:21.025 --rc genhtml_legend=1 00:23:21.025 --rc geninfo_all_blocks=1 00:23:21.025 --rc geninfo_unexecuted_blocks=1 00:23:21.025 00:23:21.025 ' 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:21.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.025 --rc genhtml_branch_coverage=1 00:23:21.025 --rc genhtml_function_coverage=1 00:23:21.025 --rc genhtml_legend=1 00:23:21.025 --rc geninfo_all_blocks=1 00:23:21.025 --rc geninfo_unexecuted_blocks=1 00:23:21.025 00:23:21.025 ' 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:21.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.025 --rc genhtml_branch_coverage=1 00:23:21.025 --rc genhtml_function_coverage=1 00:23:21.025 --rc genhtml_legend=1 00:23:21.025 --rc geninfo_all_blocks=1 00:23:21.025 --rc geninfo_unexecuted_blocks=1 00:23:21.025 00:23:21.025 ' 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:21.025 
15:40:26 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:21.025 00:23:21.025 real 0m0.206s 00:23:21.025 user 0m0.133s 00:23:21.025 sys 0m0.087s 00:23:21.025 15:40:26 
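The non-fatal `[: : integer expression expected` warning traced above comes from `test/nvmf/common.sh` line 33 evaluating `'[' '' -eq 1 ']'`: the `-eq` operator requires integer operands, but the variable under test expands to an empty string. A minimal reproduction and a defensive rewrite (the `flag` variable is a hypothetical stand-in for whatever unset variable common.sh tests):

```shell
#!/usr/bin/env bash
# Reproduces the warning seen in the log: '[' '' -eq 1 ']' fails because
# -eq requires integer operands and an unset variable expands to "".
flag=""   # hypothetical stand-in for the unset flag tested in common.sh

# This form emits "[: : integer expression expected" on stderr and
# returns non-zero, which is why the log shows the message but the
# script keeps running:
[ "$flag" -eq 1 ] 2>/dev/null && echo "enabled"

# Defensive rewrite: default the expansion to 0 so the operand is
# always an integer and the comparison is silent.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

The `${flag:-0}` default is the usual guard for test scripts that probe optional environment flags, since it keeps the comparison numeric whether or not the flag was exported.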
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.025 15:40:26 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:21.025 ************************************ 00:23:21.025 END TEST dma 00:23:21.025 ************************************ 00:23:21.283 15:40:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:21.283 15:40:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:21.283 15:40:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.283 15:40:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.283 ************************************ 00:23:21.283 START TEST nvmf_identify 00:23:21.283 ************************************ 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:21.284 * Looking for test storage... 
00:23:21.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.284 --rc genhtml_branch_coverage=1 00:23:21.284 --rc genhtml_function_coverage=1 00:23:21.284 --rc genhtml_legend=1 00:23:21.284 --rc geninfo_all_blocks=1 00:23:21.284 --rc geninfo_unexecuted_blocks=1 00:23:21.284 00:23:21.284 ' 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:23:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.284 --rc genhtml_branch_coverage=1 00:23:21.284 --rc genhtml_function_coverage=1 00:23:21.284 --rc genhtml_legend=1 00:23:21.284 --rc geninfo_all_blocks=1 00:23:21.284 --rc geninfo_unexecuted_blocks=1 00:23:21.284 00:23:21.284 ' 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.284 --rc genhtml_branch_coverage=1 00:23:21.284 --rc genhtml_function_coverage=1 00:23:21.284 --rc genhtml_legend=1 00:23:21.284 --rc geninfo_all_blocks=1 00:23:21.284 --rc geninfo_unexecuted_blocks=1 00:23:21.284 00:23:21.284 ' 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.284 --rc genhtml_branch_coverage=1 00:23:21.284 --rc genhtml_function_coverage=1 00:23:21.284 --rc genhtml_legend=1 00:23:21.284 --rc geninfo_all_blocks=1 00:23:21.284 --rc geninfo_unexecuted_blocks=1 00:23:21.284 00:23:21.284 ' 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.284 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:21.285 15:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:27.850 15:40:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:27.850 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.850 
15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:27.850 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.850 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:27.851 Found net devices under 0000:86:00.0: cvl_0_0 00:23:27.851 15:40:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:27.851 Found net devices under 0000:86:00.1: cvl_0_1 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.851 15:40:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:27.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:23:27.851 00:23:27.851 --- 10.0.0.2 ping statistics --- 00:23:27.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.851 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:27.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:23:27.851 00:23:27.851 --- 10.0.0.1 ping statistics --- 00:23:27.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.851 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3091914 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3091914 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3091914 ']' 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
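The trace above (from SPDK's `nvmf/common.sh`) moves the target interface into a network namespace, assigns the 10.0.0.0/24 addresses, brings the links up, and opens TCP port 4420 before launching `nvmf_tgt` inside the namespace. A minimal sketch of that command sequence, built as strings rather than executed (the real steps need root and the `cvl_0_0`/`cvl_0_1` interfaces; the helper name `netns_setup_cmds` is illustrative, not part of SPDK):

```python
# Interface/namespace names and addresses taken from the log above.
NS = "cvl_0_0_ns_spdk"
TARGET_IF, INITIATOR_IF = "cvl_0_0", "cvl_0_1"
TARGET_IP, INITIATOR_IP = "10.0.0.2", "10.0.0.1"

def netns_setup_cmds():
    """Return the setup commands in the order the log runs them.

    This is a sketch of what nvmf/common.sh traces; it only builds the
    strings so it can run unprivileged.
    """
    in_ns = f"ip netns exec {NS} "
    return [
        f"ip netns add {NS}",
        f"ip link set {TARGET_IF} netns {NS}",
        f"ip addr add {INITIATOR_IP}/24 dev {INITIATOR_IF}",
        in_ns + f"ip addr add {TARGET_IP}/24 dev {TARGET_IF}",
        f"ip link set {INITIATOR_IF} up",
        in_ns + f"ip link set {TARGET_IF} up",
        in_ns + "ip link set lo up",
        f"iptables -I INPUT 1 -i {INITIATOR_IF} -p tcp --dport 4420 -j ACCEPT",
    ]
```

The bidirectional `ping` checks in the log then confirm that the initiator side (10.0.0.1) and the namespaced target side (10.0.0.2) can reach each other before the NVMe-oF target starts listening.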
00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:27.851 [2024-12-06 15:40:33.259450] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:23:27.851 [2024-12-06 15:40:33.259496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.851 [2024-12-06 15:40:33.335449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.851 [2024-12-06 15:40:33.376337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.851 [2024-12-06 15:40:33.376380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.851 [2024-12-06 15:40:33.376390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.851 [2024-12-06 15:40:33.376398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.851 [2024-12-06 15:40:33.376403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
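The `waitforlisten 3091914` step above blocks until the freshly launched `nvmf_tgt` creates its RPC UNIX socket (`/var/tmp/spdk.sock` by default) and answers on it. A simplified sketch of that polling loop, assuming only "socket path exists" as the readiness signal (the real helper in `autotest_common.sh` also verifies the pid is alive and issues an RPC; `max_retries=100` mirrors the `local max_retries=100` in the trace):

```python
import os
import time

def wait_for_listen(sock_path: str, max_retries: int = 100,
                    delay: float = 0.1) -> bool:
    """Poll until sock_path appears; True on success, False on timeout.

    Simplified stand-in for SPDK's waitforlisten: it checks only for the
    socket file, not process liveness or RPC responsiveness.
    """
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            return True
        time.sleep(delay)
    return False
```

Once this returns, the script proceeds to the `rpc_cmd nvmf_create_transport -t tcp -o -u 8192` call seen next in the log.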
00:23:27.851 [2024-12-06 15:40:33.378059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.851 [2024-12-06 15:40:33.378170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.851 [2024-12-06 15:40:33.378275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.851 [2024-12-06 15:40:33.378277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:27.851 [2024-12-06 15:40:33.492505] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:27.851 Malloc0 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.851 15:40:33 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:27.851 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.852 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:27.852 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.852 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:27.852 [2024-12-06 15:40:33.599551] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.852 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.852 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:27.852 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.852 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:27.852 15:40:33 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.852 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:27.852 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.852 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:27.852 [ 00:23:27.852 { 00:23:27.852 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:27.852 "subtype": "Discovery", 00:23:27.852 "listen_addresses": [ 00:23:27.852 { 00:23:27.852 "trtype": "TCP", 00:23:27.852 "adrfam": "IPv4", 00:23:27.852 "traddr": "10.0.0.2", 00:23:27.852 "trsvcid": "4420" 00:23:27.852 } 00:23:27.852 ], 00:23:27.852 "allow_any_host": true, 00:23:27.852 "hosts": [] 00:23:27.852 }, 00:23:27.852 { 00:23:27.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.852 "subtype": "NVMe", 00:23:27.852 "listen_addresses": [ 00:23:27.852 { 00:23:27.852 "trtype": "TCP", 00:23:27.852 "adrfam": "IPv4", 00:23:27.852 "traddr": "10.0.0.2", 00:23:27.852 "trsvcid": "4420" 00:23:27.852 } 00:23:27.852 ], 00:23:27.852 "allow_any_host": true, 00:23:27.852 "hosts": [], 00:23:27.852 "serial_number": "SPDK00000000000001", 00:23:27.852 "model_number": "SPDK bdev Controller", 00:23:27.852 "max_namespaces": 32, 00:23:27.852 "min_cntlid": 1, 00:23:27.852 "max_cntlid": 65519, 00:23:27.852 "namespaces": [ 00:23:27.852 { 00:23:27.852 "nsid": 1, 00:23:27.852 "bdev_name": "Malloc0", 00:23:27.852 "name": "Malloc0", 00:23:27.852 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:27.852 "eui64": "ABCDEF0123456789", 00:23:27.852 "uuid": "8e6f86fc-19c8-4f8c-a027-e8f4dad93dc1" 00:23:27.852 } 00:23:27.852 ] 00:23:27.852 } 00:23:27.852 ] 00:23:27.852 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.852 15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:27.852 [2024-12-06 15:40:33.649621] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:23:27.852 [2024-12-06 15:40:33.649662] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091937 ] 00:23:27.852 [2024-12-06 15:40:33.689883] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:27.852 [2024-12-06 15:40:33.689933] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:27.852 [2024-12-06 15:40:33.689939] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:27.852 [2024-12-06 15:40:33.689952] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:27.852 [2024-12-06 15:40:33.689960] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:27.852 [2024-12-06 15:40:33.693683] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:27.852 [2024-12-06 15:40:33.693721] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11c3690 0 00:23:27.852 [2024-12-06 15:40:33.701378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:27.852 [2024-12-06 15:40:33.701393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:27.852 [2024-12-06 15:40:33.701398] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:27.852 [2024-12-06 15:40:33.701401] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:27.852 [2024-12-06 15:40:33.701435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.701442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.701445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c3690) 00:23:27.852 [2024-12-06 15:40:33.701458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:27.852 [2024-12-06 15:40:33.701475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225100, cid 0, qid 0 00:23:27.852 [2024-12-06 15:40:33.709376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:27.852 [2024-12-06 15:40:33.709384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:27.852 [2024-12-06 15:40:33.709387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.709392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225100) on tqpair=0x11c3690 00:23:27.852 [2024-12-06 15:40:33.709401] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:27.852 [2024-12-06 15:40:33.709407] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:27.852 [2024-12-06 15:40:33.709412] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:27.852 [2024-12-06 15:40:33.709425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.709429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.709432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c3690) 
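The `nvme_ctrlr.c` DEBUG lines above and below trace the fabrics controller-init state machine: read VS, read CAP, check `CC.EN`, confirm the controller is disabled (`CC.EN = 0 && CSTS.RDY = 0`), write `CC.EN = 1`, then wait for `CSTS.RDY = 1`. A toy model of that handshake, with the controller side simulated in-process (nothing here talks to a real target; `FakeCtrlr` and `init_sequence` are illustrative names, not SPDK APIs):

```python
class FakeCtrlr:
    """Simulated controller: RDY follows EN immediately (real hardware
    may take up to CAP.TO * 500 ms, which is why the trace shows a
    15000 ms timeout on the wait states)."""
    def __init__(self):
        self.cc_en = 0
        self.csts_rdy = 0

    def write_cc_en(self, value: int) -> None:
        self.cc_en = value
        self.csts_rdy = value  # simulation shortcut

def init_sequence(ctrlr: FakeCtrlr):
    """Replay the ordered init steps named in the trace."""
    steps = ["read vs", "read cap", "check en"]
    if ctrlr.cc_en == 0 and ctrlr.csts_rdy == 0:
        steps.append("controller is disabled")
    ctrlr.write_cc_en(1)                 # "Setting CC.EN = 1"
    steps.append("wait for CSTS.RDY = 1")
    if ctrlr.csts_rdy == 1:              # "controller is ready"
        steps.append("ready")
    return steps
```

After the ready state, the trace continues with IDENTIFY, AER configuration, and keep-alive setup, then fetches the discovery log page (`GET LOG PAGE ... cdw10:00ff0070`).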
00:23:27.852 [2024-12-06 15:40:33.709438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.852 [2024-12-06 15:40:33.709452] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225100, cid 0, qid 0 00:23:27.852 [2024-12-06 15:40:33.709595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:27.852 [2024-12-06 15:40:33.709601] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:27.852 [2024-12-06 15:40:33.709604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.709607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225100) on tqpair=0x11c3690 00:23:27.852 [2024-12-06 15:40:33.709613] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:27.852 [2024-12-06 15:40:33.709619] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:27.852 [2024-12-06 15:40:33.709626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.709629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.709633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c3690) 00:23:27.852 [2024-12-06 15:40:33.709638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.852 [2024-12-06 15:40:33.709648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225100, cid 0, qid 0 00:23:27.852 [2024-12-06 15:40:33.709707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:27.852 [2024-12-06 15:40:33.709713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:27.852 [2024-12-06 15:40:33.709716] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.709719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225100) on tqpair=0x11c3690 00:23:27.852 [2024-12-06 15:40:33.709724] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:27.852 [2024-12-06 15:40:33.709733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:27.852 [2024-12-06 15:40:33.709739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.709743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.709746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c3690) 00:23:27.852 [2024-12-06 15:40:33.709752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.852 [2024-12-06 15:40:33.709761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225100, cid 0, qid 0 00:23:27.852 [2024-12-06 15:40:33.709825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:27.852 [2024-12-06 15:40:33.709830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:27.852 [2024-12-06 15:40:33.709833] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.709836] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225100) on tqpair=0x11c3690 00:23:27.852 [2024-12-06 15:40:33.709841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:27.852 [2024-12-06 15:40:33.709849] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.709852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.709855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c3690) 00:23:27.852 [2024-12-06 15:40:33.709861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.852 [2024-12-06 15:40:33.709870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225100, cid 0, qid 0 00:23:27.852 [2024-12-06 15:40:33.709943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:27.852 [2024-12-06 15:40:33.709949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:27.852 [2024-12-06 15:40:33.709952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:27.852 [2024-12-06 15:40:33.709955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225100) on tqpair=0x11c3690 00:23:27.852 [2024-12-06 15:40:33.709959] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:27.852 [2024-12-06 15:40:33.709964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:27.852 [2024-12-06 15:40:33.709971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:27.852 [2024-12-06 15:40:33.710081] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:27.853 [2024-12-06 15:40:33.710085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:27.853 [2024-12-06 15:40:33.710092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.710096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.710099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c3690) 00:23:27.853 [2024-12-06 15:40:33.710104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.853 [2024-12-06 15:40:33.710115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225100, cid 0, qid 0 00:23:27.853 [2024-12-06 15:40:33.710173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:27.853 [2024-12-06 15:40:33.710179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:27.853 [2024-12-06 15:40:33.710184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.710187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225100) on tqpair=0x11c3690 00:23:27.853 [2024-12-06 15:40:33.710192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:27.853 [2024-12-06 15:40:33.710200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.710203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.710206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c3690) 00:23:27.853 [2024-12-06 15:40:33.710212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.853 [2024-12-06 15:40:33.710221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225100, cid 0, qid 0 00:23:27.853 [2024-12-06 
15:40:33.710290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:27.853 [2024-12-06 15:40:33.710296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:27.853 [2024-12-06 15:40:33.710299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.710302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225100) on tqpair=0x11c3690 00:23:27.853 [2024-12-06 15:40:33.710306] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:27.853 [2024-12-06 15:40:33.710310] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:27.853 [2024-12-06 15:40:33.710317] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:27.853 [2024-12-06 15:40:33.710324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:27.853 [2024-12-06 15:40:33.710332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.710336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c3690) 00:23:27.853 [2024-12-06 15:40:33.710342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.853 [2024-12-06 15:40:33.710351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225100, cid 0, qid 0 00:23:27.853 [2024-12-06 15:40:33.710443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:27.853 [2024-12-06 15:40:33.710450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:23:27.853 [2024-12-06 15:40:33.710453] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.710457] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c3690): datao=0, datal=4096, cccid=0 00:23:27.853 [2024-12-06 15:40:33.710461] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1225100) on tqpair(0x11c3690): expected_datao=0, payload_size=4096 00:23:27.853 [2024-12-06 15:40:33.710466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.710477] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.710481] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:27.853 [2024-12-06 15:40:33.751489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:27.853 [2024-12-06 15:40:33.751492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225100) on tqpair=0x11c3690 00:23:27.853 [2024-12-06 15:40:33.751503] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:27.853 [2024-12-06 15:40:33.751513] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:27.853 [2024-12-06 15:40:33.751517] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:27.853 [2024-12-06 15:40:33.751522] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:27.853 [2024-12-06 15:40:33.751526] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:27.853 [2024-12-06 15:40:33.751530] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:27.853 [2024-12-06 15:40:33.751539] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:27.853 [2024-12-06 15:40:33.751546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c3690) 00:23:27.853 [2024-12-06 15:40:33.751560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:27.853 [2024-12-06 15:40:33.751571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225100, cid 0, qid 0 00:23:27.853 [2024-12-06 15:40:33.751646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:27.853 [2024-12-06 15:40:33.751652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:27.853 [2024-12-06 15:40:33.751655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225100) on tqpair=0x11c3690 00:23:27.853 [2024-12-06 15:40:33.751665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11c3690) 00:23:27.853 [2024-12-06 15:40:33.751677] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.853 [2024-12-06 15:40:33.751682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11c3690) 00:23:27.853 [2024-12-06 15:40:33.751693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.853 [2024-12-06 15:40:33.751698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11c3690) 00:23:27.853 [2024-12-06 15:40:33.751710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.853 [2024-12-06 15:40:33.751715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:27.853 [2024-12-06 15:40:33.751726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.853 [2024-12-06 15:40:33.751730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:27.853 [2024-12-06 15:40:33.751743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:27.853 [2024-12-06 15:40:33.751749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c3690) 00:23:27.853 [2024-12-06 15:40:33.751758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.853 [2024-12-06 15:40:33.751769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225100, cid 0, qid 0 00:23:27.853 [2024-12-06 15:40:33.751774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225280, cid 1, qid 0 00:23:27.853 [2024-12-06 15:40:33.751778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225400, cid 2, qid 0 00:23:27.853 [2024-12-06 15:40:33.751782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:27.853 [2024-12-06 15:40:33.751785] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225700, cid 4, qid 0 00:23:27.853 [2024-12-06 15:40:33.751880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:27.853 [2024-12-06 15:40:33.751886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:27.853 [2024-12-06 15:40:33.751889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225700) on tqpair=0x11c3690 00:23:27.853 [2024-12-06 15:40:33.751897] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:27.853 [2024-12-06 15:40:33.751902] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:23:27.853 [2024-12-06 15:40:33.751911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.853 [2024-12-06 15:40:33.751915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c3690) 00:23:27.854 [2024-12-06 15:40:33.751920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.854 [2024-12-06 15:40:33.751930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225700, cid 4, qid 0 00:23:27.854 [2024-12-06 15:40:33.752000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:27.854 [2024-12-06 15:40:33.752006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:27.854 [2024-12-06 15:40:33.752009] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.752012] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c3690): datao=0, datal=4096, cccid=4 00:23:27.854 [2024-12-06 15:40:33.752016] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1225700) on tqpair(0x11c3690): expected_datao=0, payload_size=4096 00:23:27.854 [2024-12-06 15:40:33.752020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.752044] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.752048] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.752087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:27.854 [2024-12-06 15:40:33.752092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:27.854 [2024-12-06 15:40:33.752095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.752099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1225700) on tqpair=0x11c3690 00:23:27.854 [2024-12-06 15:40:33.752109] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:27.854 [2024-12-06 15:40:33.752129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.752133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c3690) 00:23:27.854 [2024-12-06 15:40:33.752141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.854 [2024-12-06 15:40:33.752147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.752150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.752153] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11c3690) 00:23:27.854 [2024-12-06 15:40:33.752158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.854 [2024-12-06 15:40:33.752171] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225700, cid 4, qid 0 00:23:27.854 [2024-12-06 15:40:33.752176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225880, cid 5, qid 0 00:23:27.854 [2024-12-06 15:40:33.752270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:27.854 [2024-12-06 15:40:33.752276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:27.854 [2024-12-06 15:40:33.752279] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.752282] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c3690): datao=0, datal=1024, cccid=4 00:23:27.854 [2024-12-06 15:40:33.752286] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1225700) on tqpair(0x11c3690): expected_datao=0, payload_size=1024 00:23:27.854 [2024-12-06 15:40:33.752290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.752295] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.752298] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.752303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:27.854 [2024-12-06 15:40:33.752308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:27.854 [2024-12-06 15:40:33.752311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.752314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225880) on tqpair=0x11c3690 00:23:27.854 [2024-12-06 15:40:33.794373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:27.854 [2024-12-06 15:40:33.794384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:27.854 [2024-12-06 15:40:33.794387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.794390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225700) on tqpair=0x11c3690 00:23:27.854 [2024-12-06 15:40:33.794401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.794405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c3690) 00:23:27.854 [2024-12-06 15:40:33.794412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.854 [2024-12-06 15:40:33.794428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225700, cid 4, qid 0 00:23:27.854 [2024-12-06 15:40:33.794581] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:27.854 [2024-12-06 15:40:33.794587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:27.854 [2024-12-06 15:40:33.794590] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.794593] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c3690): datao=0, datal=3072, cccid=4 00:23:27.854 [2024-12-06 15:40:33.794597] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1225700) on tqpair(0x11c3690): expected_datao=0, payload_size=3072 00:23:27.854 [2024-12-06 15:40:33.794601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.794613] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.794617] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.835477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:27.854 [2024-12-06 15:40:33.835485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:27.854 [2024-12-06 15:40:33.835491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.835495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225700) on tqpair=0x11c3690 00:23:27.854 [2024-12-06 15:40:33.835504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.835507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11c3690) 00:23:27.854 [2024-12-06 15:40:33.835514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.854 [2024-12-06 15:40:33.835527] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225700, cid 4, qid 0 00:23:27.854 [2024-12-06 
15:40:33.835595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:27.854 [2024-12-06 15:40:33.835600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:27.854 [2024-12-06 15:40:33.835603] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.835607] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11c3690): datao=0, datal=8, cccid=4 00:23:27.854 [2024-12-06 15:40:33.835610] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1225700) on tqpair(0x11c3690): expected_datao=0, payload_size=8 00:23:27.854 [2024-12-06 15:40:33.835614] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.835620] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:27.854 [2024-12-06 15:40:33.835623] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.118 [2024-12-06 15:40:33.876501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.118 [2024-12-06 15:40:33.876514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.118 [2024-12-06 15:40:33.876518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.118 [2024-12-06 15:40:33.876521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225700) on tqpair=0x11c3690 00:23:28.118 ===================================================== 00:23:28.118 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:28.118 ===================================================== 00:23:28.118 Controller Capabilities/Features 00:23:28.118 ================================ 00:23:28.118 Vendor ID: 0000 00:23:28.118 Subsystem Vendor ID: 0000 00:23:28.118 Serial Number: .................... 00:23:28.118 Model Number: ........................................ 
00:23:28.118 Firmware Version: 25.01 00:23:28.118 Recommended Arb Burst: 0 00:23:28.118 IEEE OUI Identifier: 00 00 00 00:23:28.118 Multi-path I/O 00:23:28.118 May have multiple subsystem ports: No 00:23:28.118 May have multiple controllers: No 00:23:28.118 Associated with SR-IOV VF: No 00:23:28.118 Max Data Transfer Size: 131072 00:23:28.118 Max Number of Namespaces: 0 00:23:28.118 Max Number of I/O Queues: 1024 00:23:28.118 NVMe Specification Version (VS): 1.3 00:23:28.118 NVMe Specification Version (Identify): 1.3 00:23:28.118 Maximum Queue Entries: 128 00:23:28.118 Contiguous Queues Required: Yes 00:23:28.118 Arbitration Mechanisms Supported 00:23:28.118 Weighted Round Robin: Not Supported 00:23:28.118 Vendor Specific: Not Supported 00:23:28.118 Reset Timeout: 15000 ms 00:23:28.118 Doorbell Stride: 4 bytes 00:23:28.118 NVM Subsystem Reset: Not Supported 00:23:28.118 Command Sets Supported 00:23:28.118 NVM Command Set: Supported 00:23:28.118 Boot Partition: Not Supported 00:23:28.118 Memory Page Size Minimum: 4096 bytes 00:23:28.118 Memory Page Size Maximum: 4096 bytes 00:23:28.118 Persistent Memory Region: Not Supported 00:23:28.118 Optional Asynchronous Events Supported 00:23:28.118 Namespace Attribute Notices: Not Supported 00:23:28.118 Firmware Activation Notices: Not Supported 00:23:28.118 ANA Change Notices: Not Supported 00:23:28.118 PLE Aggregate Log Change Notices: Not Supported 00:23:28.118 LBA Status Info Alert Notices: Not Supported 00:23:28.118 EGE Aggregate Log Change Notices: Not Supported 00:23:28.118 Normal NVM Subsystem Shutdown event: Not Supported 00:23:28.118 Zone Descriptor Change Notices: Not Supported 00:23:28.118 Discovery Log Change Notices: Supported 00:23:28.118 Controller Attributes 00:23:28.118 128-bit Host Identifier: Not Supported 00:23:28.118 Non-Operational Permissive Mode: Not Supported 00:23:28.118 NVM Sets: Not Supported 00:23:28.118 Read Recovery Levels: Not Supported 00:23:28.118 Endurance Groups: Not Supported 00:23:28.118 
Predictable Latency Mode: Not Supported 00:23:28.118 Traffic Based Keep ALive: Not Supported 00:23:28.118 Namespace Granularity: Not Supported 00:23:28.118 SQ Associations: Not Supported 00:23:28.118 UUID List: Not Supported 00:23:28.118 Multi-Domain Subsystem: Not Supported 00:23:28.118 Fixed Capacity Management: Not Supported 00:23:28.118 Variable Capacity Management: Not Supported 00:23:28.118 Delete Endurance Group: Not Supported 00:23:28.118 Delete NVM Set: Not Supported 00:23:28.118 Extended LBA Formats Supported: Not Supported 00:23:28.118 Flexible Data Placement Supported: Not Supported 00:23:28.118 00:23:28.118 Controller Memory Buffer Support 00:23:28.118 ================================ 00:23:28.118 Supported: No 00:23:28.118 00:23:28.118 Persistent Memory Region Support 00:23:28.118 ================================ 00:23:28.118 Supported: No 00:23:28.118 00:23:28.118 Admin Command Set Attributes 00:23:28.118 ============================ 00:23:28.118 Security Send/Receive: Not Supported 00:23:28.118 Format NVM: Not Supported 00:23:28.118 Firmware Activate/Download: Not Supported 00:23:28.118 Namespace Management: Not Supported 00:23:28.118 Device Self-Test: Not Supported 00:23:28.118 Directives: Not Supported 00:23:28.118 NVMe-MI: Not Supported 00:23:28.118 Virtualization Management: Not Supported 00:23:28.118 Doorbell Buffer Config: Not Supported 00:23:28.118 Get LBA Status Capability: Not Supported 00:23:28.118 Command & Feature Lockdown Capability: Not Supported 00:23:28.118 Abort Command Limit: 1 00:23:28.118 Async Event Request Limit: 4 00:23:28.118 Number of Firmware Slots: N/A 00:23:28.118 Firmware Slot 1 Read-Only: N/A 00:23:28.118 Firmware Activation Without Reset: N/A 00:23:28.118 Multiple Update Detection Support: N/A 00:23:28.118 Firmware Update Granularity: No Information Provided 00:23:28.118 Per-Namespace SMART Log: No 00:23:28.118 Asymmetric Namespace Access Log Page: Not Supported 00:23:28.118 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:23:28.118 Command Effects Log Page: Not Supported 00:23:28.118 Get Log Page Extended Data: Supported 00:23:28.118 Telemetry Log Pages: Not Supported 00:23:28.118 Persistent Event Log Pages: Not Supported 00:23:28.118 Supported Log Pages Log Page: May Support 00:23:28.118 Commands Supported & Effects Log Page: Not Supported 00:23:28.118 Feature Identifiers & Effects Log Page:May Support 00:23:28.118 NVMe-MI Commands & Effects Log Page: May Support 00:23:28.118 Data Area 4 for Telemetry Log: Not Supported 00:23:28.118 Error Log Page Entries Supported: 128 00:23:28.118 Keep Alive: Not Supported 00:23:28.118 00:23:28.118 NVM Command Set Attributes 00:23:28.118 ========================== 00:23:28.118 Submission Queue Entry Size 00:23:28.118 Max: 1 00:23:28.118 Min: 1 00:23:28.118 Completion Queue Entry Size 00:23:28.118 Max: 1 00:23:28.118 Min: 1 00:23:28.118 Number of Namespaces: 0 00:23:28.118 Compare Command: Not Supported 00:23:28.118 Write Uncorrectable Command: Not Supported 00:23:28.118 Dataset Management Command: Not Supported 00:23:28.118 Write Zeroes Command: Not Supported 00:23:28.118 Set Features Save Field: Not Supported 00:23:28.118 Reservations: Not Supported 00:23:28.118 Timestamp: Not Supported 00:23:28.118 Copy: Not Supported 00:23:28.118 Volatile Write Cache: Not Present 00:23:28.118 Atomic Write Unit (Normal): 1 00:23:28.118 Atomic Write Unit (PFail): 1 00:23:28.118 Atomic Compare & Write Unit: 1 00:23:28.118 Fused Compare & Write: Supported 00:23:28.118 Scatter-Gather List 00:23:28.118 SGL Command Set: Supported 00:23:28.118 SGL Keyed: Supported 00:23:28.118 SGL Bit Bucket Descriptor: Not Supported 00:23:28.118 SGL Metadata Pointer: Not Supported 00:23:28.118 Oversized SGL: Not Supported 00:23:28.118 SGL Metadata Address: Not Supported 00:23:28.118 SGL Offset: Supported 00:23:28.118 Transport SGL Data Block: Not Supported 00:23:28.118 Replay Protected Memory Block: Not Supported 00:23:28.118 00:23:28.118 
Firmware Slot Information 00:23:28.118 ========================= 00:23:28.118 Active slot: 0 00:23:28.118 00:23:28.118 00:23:28.118 Error Log 00:23:28.118 ========= 00:23:28.118 00:23:28.118 Active Namespaces 00:23:28.118 ================= 00:23:28.118 Discovery Log Page 00:23:28.118 ================== 00:23:28.118 Generation Counter: 2 00:23:28.118 Number of Records: 2 00:23:28.118 Record Format: 0 00:23:28.118 00:23:28.118 Discovery Log Entry 0 00:23:28.118 ---------------------- 00:23:28.118 Transport Type: 3 (TCP) 00:23:28.118 Address Family: 1 (IPv4) 00:23:28.118 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:28.118 Entry Flags: 00:23:28.118 Duplicate Returned Information: 1 00:23:28.118 Explicit Persistent Connection Support for Discovery: 1 00:23:28.118 Transport Requirements: 00:23:28.118 Secure Channel: Not Required 00:23:28.118 Port ID: 0 (0x0000) 00:23:28.118 Controller ID: 65535 (0xffff) 00:23:28.118 Admin Max SQ Size: 128 00:23:28.118 Transport Service Identifier: 4420 00:23:28.118 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:28.118 Transport Address: 10.0.0.2 00:23:28.118 Discovery Log Entry 1 00:23:28.118 ---------------------- 00:23:28.118 Transport Type: 3 (TCP) 00:23:28.118 Address Family: 1 (IPv4) 00:23:28.118 Subsystem Type: 2 (NVM Subsystem) 00:23:28.118 Entry Flags: 00:23:28.118 Duplicate Returned Information: 0 00:23:28.118 Explicit Persistent Connection Support for Discovery: 0 00:23:28.118 Transport Requirements: 00:23:28.118 Secure Channel: Not Required 00:23:28.118 Port ID: 0 (0x0000) 00:23:28.118 Controller ID: 65535 (0xffff) 00:23:28.118 Admin Max SQ Size: 128 00:23:28.118 Transport Service Identifier: 4420 00:23:28.118 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:28.118 Transport Address: 10.0.0.2 [2024-12-06 15:40:33.876606] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:28.118 [2024-12-06 
15:40:33.876616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225100) on tqpair=0x11c3690 00:23:28.118 [2024-12-06 15:40:33.876622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.118 [2024-12-06 15:40:33.876627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225280) on tqpair=0x11c3690 00:23:28.118 [2024-12-06 15:40:33.876631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.118 [2024-12-06 15:40:33.876635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225400) on tqpair=0x11c3690 00:23:28.119 [2024-12-06 15:40:33.876639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.119 [2024-12-06 15:40:33.876644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.119 [2024-12-06 15:40:33.876648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.119 [2024-12-06 15:40:33.876657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.876661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.876664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.119 [2024-12-06 15:40:33.876671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.119 [2024-12-06 15:40:33.876686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.119 [2024-12-06 15:40:33.876747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.119 [2024-12-06 
15:40:33.876753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.119 [2024-12-06 15:40:33.876758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.876762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.119 [2024-12-06 15:40:33.876769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.876772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.876775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.119 [2024-12-06 15:40:33.876781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.119 [2024-12-06 15:40:33.876794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.119 [2024-12-06 15:40:33.876860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.119 [2024-12-06 15:40:33.876866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.119 [2024-12-06 15:40:33.876870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.876873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.119 [2024-12-06 15:40:33.876878] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:28.119 [2024-12-06 15:40:33.876882] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:28.119 [2024-12-06 15:40:33.876890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.876893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.119 
[2024-12-06 15:40:33.876897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.119 [2024-12-06 15:40:33.876902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.119 [2024-12-06 15:40:33.876913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.119 [2024-12-06 15:40:33.876971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.119 [2024-12-06 15:40:33.876976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.119 [2024-12-06 15:40:33.876979] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.876983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.119 [2024-12-06 15:40:33.876991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.876995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.876998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.119 [2024-12-06 15:40:33.877003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.119 [2024-12-06 15:40:33.877012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.119 [2024-12-06 15:40:33.877071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.119 [2024-12-06 15:40:33.877077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.119 [2024-12-06 15:40:33.877080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on 
tqpair=0x11c3690 00:23:28.119 [2024-12-06 15:40:33.877091] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.119 [2024-12-06 15:40:33.877103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.119 [2024-12-06 15:40:33.877113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.119 [2024-12-06 15:40:33.877172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.119 [2024-12-06 15:40:33.877178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.119 [2024-12-06 15:40:33.877181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.119 [2024-12-06 15:40:33.877192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.119 [2024-12-06 15:40:33.877204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.119 [2024-12-06 15:40:33.877214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.119 [2024-12-06 15:40:33.877281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.119 [2024-12-06 15:40:33.877287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:23:28.119 [2024-12-06 15:40:33.877290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.119 [2024-12-06 15:40:33.877302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.119 [2024-12-06 15:40:33.877315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.119 [2024-12-06 15:40:33.877324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.119 [2024-12-06 15:40:33.877392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.119 [2024-12-06 15:40:33.877398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.119 [2024-12-06 15:40:33.877401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.119 [2024-12-06 15:40:33.877413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.119 [2024-12-06 15:40:33.877425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.119 [2024-12-06 15:40:33.877435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1225580, cid 3, qid 0 00:23:28.119 [2024-12-06 15:40:33.877493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.119 [2024-12-06 15:40:33.877499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.119 [2024-12-06 15:40:33.877502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.119 [2024-12-06 15:40:33.877513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877518] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.119 [2024-12-06 15:40:33.877526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.119 [2024-12-06 15:40:33.877536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.119 [2024-12-06 15:40:33.877594] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.119 [2024-12-06 15:40:33.877602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.119 [2024-12-06 15:40:33.877605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.119 [2024-12-06 15:40:33.877617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.119 [2024-12-06 15:40:33.877629] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.119 [2024-12-06 15:40:33.877638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.119 [2024-12-06 15:40:33.877704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.119 [2024-12-06 15:40:33.877710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.119 [2024-12-06 15:40:33.877713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.119 [2024-12-06 15:40:33.877724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.119 [2024-12-06 15:40:33.877737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.119 [2024-12-06 15:40:33.877746] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.119 [2024-12-06 15:40:33.877813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.119 [2024-12-06 15:40:33.877818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.119 [2024-12-06 15:40:33.877821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.119 [2024-12-06 15:40:33.877832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.119 [2024-12-06 15:40:33.877836] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.877839] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.120 [2024-12-06 15:40:33.877845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.120 [2024-12-06 15:40:33.877854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.120 [2024-12-06 15:40:33.877915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.120 [2024-12-06 15:40:33.877921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.120 [2024-12-06 15:40:33.877924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.877927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.120 [2024-12-06 15:40:33.877936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.877940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.877943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.120 [2024-12-06 15:40:33.877948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.120 [2024-12-06 15:40:33.877959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.120 [2024-12-06 15:40:33.878017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.120 [2024-12-06 15:40:33.878023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.120 [2024-12-06 15:40:33.878027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878031] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.120 [2024-12-06 15:40:33.878039] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878043] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.120 [2024-12-06 15:40:33.878051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.120 [2024-12-06 15:40:33.878061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.120 [2024-12-06 15:40:33.878128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.120 [2024-12-06 15:40:33.878133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.120 [2024-12-06 15:40:33.878136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.120 [2024-12-06 15:40:33.878148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.120 [2024-12-06 15:40:33.878161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.120 [2024-12-06 15:40:33.878170] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.120 [2024-12-06 15:40:33.878229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.120 [2024-12-06 
15:40:33.878234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.120 [2024-12-06 15:40:33.878237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.120 [2024-12-06 15:40:33.878248] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.120 [2024-12-06 15:40:33.878261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.120 [2024-12-06 15:40:33.878270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.120 [2024-12-06 15:40:33.878338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.120 [2024-12-06 15:40:33.878343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.120 [2024-12-06 15:40:33.878346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.120 [2024-12-06 15:40:33.878358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.120 [2024-12-06 15:40:33.878376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.120 [2024-12-06 
15:40:33.878386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.120 [2024-12-06 15:40:33.878446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.120 [2024-12-06 15:40:33.878452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.120 [2024-12-06 15:40:33.878455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.120 [2024-12-06 15:40:33.878468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.120 [2024-12-06 15:40:33.878481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.120 [2024-12-06 15:40:33.878491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.120 [2024-12-06 15:40:33.878545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.120 [2024-12-06 15:40:33.878551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.120 [2024-12-06 15:40:33.878553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.120 [2024-12-06 15:40:33.878565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.120 [2024-12-06 15:40:33.878577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.120 [2024-12-06 15:40:33.878586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.120 [2024-12-06 15:40:33.878645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.120 [2024-12-06 15:40:33.878651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.120 [2024-12-06 15:40:33.878654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.120 [2024-12-06 15:40:33.878665] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.120 [2024-12-06 15:40:33.878677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.120 [2024-12-06 15:40:33.878686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.120 [2024-12-06 15:40:33.878758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.120 [2024-12-06 15:40:33.878763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.120 [2024-12-06 15:40:33.878766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.120 [2024-12-06 15:40:33.878779] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.120 [2024-12-06 15:40:33.878791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.120 [2024-12-06 15:40:33.878801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.120 [2024-12-06 15:40:33.878866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.120 [2024-12-06 15:40:33.878871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.120 [2024-12-06 15:40:33.878874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.120 [2024-12-06 15:40:33.878887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.120 [2024-12-06 15:40:33.878899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.120 [2024-12-06 15:40:33.878909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.120 [2024-12-06 15:40:33.878963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.120 [2024-12-06 15:40:33.878969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.120 [2024-12-06 15:40:33.878972] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.120 [2024-12-06 15:40:33.878983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.120 [2024-12-06 15:40:33.878991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.120 [2024-12-06 15:40:33.878996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.120 [2024-12-06 15:40:33.879005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.120 [2024-12-06 15:40:33.879073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.120 [2024-12-06 15:40:33.879079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.120 [2024-12-06 15:40:33.879081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.879085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.121 [2024-12-06 15:40:33.879093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.879096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.879099] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.121 [2024-12-06 15:40:33.879105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.121 [2024-12-06 15:40:33.879114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.121 [2024-12-06 
15:40:33.879173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.121 [2024-12-06 15:40:33.879179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.121 [2024-12-06 15:40:33.879182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.879185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.121 [2024-12-06 15:40:33.879194] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.879198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.879201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.121 [2024-12-06 15:40:33.879206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.121 [2024-12-06 15:40:33.879215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.121 [2024-12-06 15:40:33.879272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.121 [2024-12-06 15:40:33.879278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.121 [2024-12-06 15:40:33.879281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.879284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.121 [2024-12-06 15:40:33.879292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.879297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.879300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.121 [2024-12-06 15:40:33.879306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.121 [2024-12-06 15:40:33.879315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.121 [2024-12-06 15:40:33.883379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.121 [2024-12-06 15:40:33.883390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.121 [2024-12-06 15:40:33.883393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.883396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.121 [2024-12-06 15:40:33.883407] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.883411] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.883414] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11c3690) 00:23:28.121 [2024-12-06 15:40:33.883420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.121 [2024-12-06 15:40:33.883433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1225580, cid 3, qid 0 00:23:28.121 [2024-12-06 15:40:33.883557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.121 [2024-12-06 15:40:33.883563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.121 [2024-12-06 15:40:33.883566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.883569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1225580) on tqpair=0x11c3690 00:23:28.121 [2024-12-06 15:40:33.883576] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:23:28.121 00:23:28.121 
15:40:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:28.121 [2024-12-06 15:40:33.920434] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:23:28.121 [2024-12-06 15:40:33.920472] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091948 ] 00:23:28.121 [2024-12-06 15:40:33.961587] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:28.121 [2024-12-06 15:40:33.961627] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:28.121 [2024-12-06 15:40:33.961633] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:28.121 [2024-12-06 15:40:33.961644] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:28.121 [2024-12-06 15:40:33.961652] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:28.121 [2024-12-06 15:40:33.961957] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:28.121 [2024-12-06 15:40:33.961984] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x74f690 0 00:23:28.121 [2024-12-06 15:40:33.979381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:28.121 [2024-12-06 15:40:33.979395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:28.121 [2024-12-06 15:40:33.979402] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:28.121 
[2024-12-06 15:40:33.979405] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:28.121 [2024-12-06 15:40:33.979434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.979439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.979443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74f690) 00:23:28.121 [2024-12-06 15:40:33.979452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:28.121 [2024-12-06 15:40:33.979470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1100, cid 0, qid 0 00:23:28.121 [2024-12-06 15:40:33.987379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.121 [2024-12-06 15:40:33.987387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.121 [2024-12-06 15:40:33.987391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.987395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1100) on tqpair=0x74f690 00:23:28.121 [2024-12-06 15:40:33.987405] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:28.121 [2024-12-06 15:40:33.987411] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:28.121 [2024-12-06 15:40:33.987416] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:28.121 [2024-12-06 15:40:33.987426] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.987430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.987433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x74f690) 00:23:28.121 [2024-12-06 15:40:33.987440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.121 [2024-12-06 15:40:33.987453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1100, cid 0, qid 0 00:23:28.121 [2024-12-06 15:40:33.987565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.121 [2024-12-06 15:40:33.987571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.121 [2024-12-06 15:40:33.987574] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.987578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1100) on tqpair=0x74f690 00:23:28.121 [2024-12-06 15:40:33.987582] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:28.121 [2024-12-06 15:40:33.987589] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:28.121 [2024-12-06 15:40:33.987595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.987598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.987601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74f690) 00:23:28.121 [2024-12-06 15:40:33.987607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.121 [2024-12-06 15:40:33.987618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1100, cid 0, qid 0 00:23:28.121 [2024-12-06 15:40:33.987676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.121 [2024-12-06 15:40:33.987682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.121 
[2024-12-06 15:40:33.987685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.987688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1100) on tqpair=0x74f690 00:23:28.121 [2024-12-06 15:40:33.987693] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:28.121 [2024-12-06 15:40:33.987700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:28.121 [2024-12-06 15:40:33.987708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.987712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.987715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74f690) 00:23:28.121 [2024-12-06 15:40:33.987720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.121 [2024-12-06 15:40:33.987730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1100, cid 0, qid 0 00:23:28.121 [2024-12-06 15:40:33.987813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.121 [2024-12-06 15:40:33.987818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.121 [2024-12-06 15:40:33.987822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.987825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1100) on tqpair=0x74f690 00:23:28.121 [2024-12-06 15:40:33.987829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:28.121 [2024-12-06 15:40:33.987837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.121 
[2024-12-06 15:40:33.987841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.121 [2024-12-06 15:40:33.987844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74f690) 00:23:28.121 [2024-12-06 15:40:33.987850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.121 [2024-12-06 15:40:33.987859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1100, cid 0, qid 0 00:23:28.122 [2024-12-06 15:40:33.987964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.122 [2024-12-06 15:40:33.987970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.122 [2024-12-06 15:40:33.987973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.987976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1100) on tqpair=0x74f690 00:23:28.122 [2024-12-06 15:40:33.987980] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:28.122 [2024-12-06 15:40:33.987984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:28.122 [2024-12-06 15:40:33.987991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:28.122 [2024-12-06 15:40:33.988098] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:28.122 [2024-12-06 15:40:33.988103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:28.122 [2024-12-06 15:40:33.988109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:23:28.122 [2024-12-06 15:40:33.988112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74f690) 00:23:28.122 [2024-12-06 15:40:33.988121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.122 [2024-12-06 15:40:33.988131] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1100, cid 0, qid 0 00:23:28.122 [2024-12-06 15:40:33.988195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.122 [2024-12-06 15:40:33.988201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.122 [2024-12-06 15:40:33.988204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1100) on tqpair=0x74f690 00:23:28.122 [2024-12-06 15:40:33.988215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:28.122 [2024-12-06 15:40:33.988223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74f690) 00:23:28.122 [2024-12-06 15:40:33.988235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.122 [2024-12-06 15:40:33.988246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1100, cid 0, qid 0 00:23:28.122 [2024-12-06 15:40:33.988304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.122 [2024-12-06 15:40:33.988309] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.122 [2024-12-06 15:40:33.988312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1100) on tqpair=0x74f690 00:23:28.122 [2024-12-06 15:40:33.988319] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:28.122 [2024-12-06 15:40:33.988324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:28.122 [2024-12-06 15:40:33.988330] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:28.122 [2024-12-06 15:40:33.988341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:28.122 [2024-12-06 15:40:33.988350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74f690) 00:23:28.122 [2024-12-06 15:40:33.988359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.122 [2024-12-06 15:40:33.988376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1100, cid 0, qid 0 00:23:28.122 [2024-12-06 15:40:33.988505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.122 [2024-12-06 15:40:33.988511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.122 [2024-12-06 15:40:33.988514] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988517] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74f690): datao=0, datal=4096, cccid=0 00:23:28.122 [2024-12-06 15:40:33.988521] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b1100) on tqpair(0x74f690): expected_datao=0, payload_size=4096 00:23:28.122 [2024-12-06 15:40:33.988525] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988531] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988535] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.122 [2024-12-06 15:40:33.988549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.122 [2024-12-06 15:40:33.988552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1100) on tqpair=0x74f690 00:23:28.122 [2024-12-06 15:40:33.988562] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:28.122 [2024-12-06 15:40:33.988568] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:28.122 [2024-12-06 15:40:33.988573] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:28.122 [2024-12-06 15:40:33.988576] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:28.122 [2024-12-06 15:40:33.988582] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:28.122 [2024-12-06 15:40:33.988586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 
00:23:28.122 [2024-12-06 15:40:33.988593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:28.122 [2024-12-06 15:40:33.988599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74f690) 00:23:28.122 [2024-12-06 15:40:33.988611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:28.122 [2024-12-06 15:40:33.988621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1100, cid 0, qid 0 00:23:28.122 [2024-12-06 15:40:33.988701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.122 [2024-12-06 15:40:33.988706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.122 [2024-12-06 15:40:33.988709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1100) on tqpair=0x74f690 00:23:28.122 [2024-12-06 15:40:33.988718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x74f690) 00:23:28.122 [2024-12-06 15:40:33.988730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.122 [2024-12-06 15:40:33.988735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988738] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x74f690) 00:23:28.122 [2024-12-06 15:40:33.988746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.122 [2024-12-06 15:40:33.988751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x74f690) 00:23:28.122 [2024-12-06 15:40:33.988763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.122 [2024-12-06 15:40:33.988768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74f690) 00:23:28.122 [2024-12-06 15:40:33.988779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.122 [2024-12-06 15:40:33.988783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:28.122 [2024-12-06 15:40:33.988793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:28.122 [2024-12-06 15:40:33.988799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.122 [2024-12-06 15:40:33.988802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x74f690) 00:23:28.123 [2024-12-06 15:40:33.988807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.123 [2024-12-06 15:40:33.988820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1100, cid 0, qid 0 00:23:28.123 [2024-12-06 15:40:33.988825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1280, cid 1, qid 0 00:23:28.123 [2024-12-06 15:40:33.988829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1400, cid 2, qid 0 00:23:28.123 [2024-12-06 15:40:33.988833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1580, cid 3, qid 0 00:23:28.123 [2024-12-06 15:40:33.988837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1700, cid 4, qid 0 00:23:28.123 [2024-12-06 15:40:33.988953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.123 [2024-12-06 15:40:33.988959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.123 [2024-12-06 15:40:33.988962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.988965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1700) on tqpair=0x74f690 00:23:28.123 [2024-12-06 15:40:33.988969] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:28.123 [2024-12-06 15:40:33.988973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.988981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.988986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.988992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.988995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.988998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74f690) 00:23:28.123 [2024-12-06 15:40:33.989004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:28.123 [2024-12-06 15:40:33.989014] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1700, cid 4, qid 0 00:23:28.123 [2024-12-06 15:40:33.989103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.123 [2024-12-06 15:40:33.989109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.123 [2024-12-06 15:40:33.989112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1700) on tqpair=0x74f690 00:23:28.123 [2024-12-06 15:40:33.989166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.989176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.989182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74f690) 00:23:28.123 [2024-12-06 15:40:33.989191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.123 [2024-12-06 15:40:33.989201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1700, cid 4, qid 0 00:23:28.123 [2024-12-06 15:40:33.989265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.123 [2024-12-06 15:40:33.989271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.123 [2024-12-06 15:40:33.989274] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989277] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74f690): datao=0, datal=4096, cccid=4 00:23:28.123 [2024-12-06 15:40:33.989282] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b1700) on tqpair(0x74f690): expected_datao=0, payload_size=4096 00:23:28.123 [2024-12-06 15:40:33.989286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989307] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989311] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.123 [2024-12-06 15:40:33.989360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.123 [2024-12-06 15:40:33.989363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1700) on tqpair=0x74f690 00:23:28.123 [2024-12-06 15:40:33.989383] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:28.123 [2024-12-06 15:40:33.989392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.989400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.989406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74f690) 00:23:28.123 [2024-12-06 15:40:33.989415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.123 [2024-12-06 15:40:33.989425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1700, cid 4, qid 0 00:23:28.123 [2024-12-06 15:40:33.989498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.123 [2024-12-06 15:40:33.989503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.123 [2024-12-06 15:40:33.989506] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989509] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74f690): datao=0, datal=4096, cccid=4 00:23:28.123 [2024-12-06 15:40:33.989513] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b1700) on tqpair(0x74f690): expected_datao=0, payload_size=4096 00:23:28.123 [2024-12-06 15:40:33.989517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989528] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989531] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.123 [2024-12-06 15:40:33.989563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.123 [2024-12-06 15:40:33.989566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.123 [2024-12-06 
15:40:33.989570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1700) on tqpair=0x74f690 00:23:28.123 [2024-12-06 15:40:33.989580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.989589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.989596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74f690) 00:23:28.123 [2024-12-06 15:40:33.989604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.123 [2024-12-06 15:40:33.989615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1700, cid 4, qid 0 00:23:28.123 [2024-12-06 15:40:33.989679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.123 [2024-12-06 15:40:33.989687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.123 [2024-12-06 15:40:33.989690] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989693] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74f690): datao=0, datal=4096, cccid=4 00:23:28.123 [2024-12-06 15:40:33.989697] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b1700) on tqpair(0x74f690): expected_datao=0, payload_size=4096 00:23:28.123 [2024-12-06 15:40:33.989700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989715] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989719] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.123 [2024-12-06 15:40:33.989764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.123 [2024-12-06 15:40:33.989767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1700) on tqpair=0x74f690 00:23:28.123 [2024-12-06 15:40:33.989777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.989784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.989791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.989799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.989804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.989809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:28.123 [2024-12-06 15:40:33.989813] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:28.123 [2024-12-06 15:40:33.989818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:28.123 
[2024-12-06 15:40:33.989822] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:28.123 [2024-12-06 15:40:33.989834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74f690) 00:23:28.123 [2024-12-06 15:40:33.989844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.123 [2024-12-06 15:40:33.989849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.123 [2024-12-06 15:40:33.989856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74f690) 00:23:28.123 [2024-12-06 15:40:33.989861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.123 [2024-12-06 15:40:33.989873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1700, cid 4, qid 0 00:23:28.123 [2024-12-06 15:40:33.989878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1880, cid 5, qid 0 00:23:28.123 [2024-12-06 15:40:33.989950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.123 [2024-12-06 15:40:33.989956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.123 [2024-12-06 15:40:33.989959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.989965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1700) on tqpair=0x74f690 00:23:28.124 [2024-12-06 15:40:33.989971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.124 [2024-12-06 15:40:33.989976] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.124 [2024-12-06 15:40:33.989979] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.989982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1880) on tqpair=0x74f690 00:23:28.124 [2024-12-06 15:40:33.989990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.989994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74f690) 00:23:28.124 [2024-12-06 15:40:33.989999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.124 [2024-12-06 15:40:33.990008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1880, cid 5, qid 0 00:23:28.124 [2024-12-06 15:40:33.990093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.124 [2024-12-06 15:40:33.990099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.124 [2024-12-06 15:40:33.990102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1880) on tqpair=0x74f690 00:23:28.124 [2024-12-06 15:40:33.990113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74f690) 00:23:28.124 [2024-12-06 15:40:33.990122] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.124 [2024-12-06 15:40:33.990131] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1880, cid 5, qid 0 00:23:28.124 [2024-12-06 15:40:33.990194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:23:28.124 [2024-12-06 15:40:33.990200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.124 [2024-12-06 15:40:33.990203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1880) on tqpair=0x74f690 00:23:28.124 [2024-12-06 15:40:33.990214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990217] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74f690) 00:23:28.124 [2024-12-06 15:40:33.990223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.124 [2024-12-06 15:40:33.990231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1880, cid 5, qid 0 00:23:28.124 [2024-12-06 15:40:33.990294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.124 [2024-12-06 15:40:33.990300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.124 [2024-12-06 15:40:33.990303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1880) on tqpair=0x74f690 00:23:28.124 [2024-12-06 15:40:33.990318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x74f690) 00:23:28.124 [2024-12-06 15:40:33.990328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.124 [2024-12-06 15:40:33.990334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.124 [2024-12-06 
15:40:33.990337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x74f690) 00:23:28.124 [2024-12-06 15:40:33.990342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.124 [2024-12-06 15:40:33.990350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x74f690) 00:23:28.124 [2024-12-06 15:40:33.990358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.124 [2024-12-06 15:40:33.990365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x74f690) 00:23:28.124 [2024-12-06 15:40:33.990381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.124 [2024-12-06 15:40:33.990392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1880, cid 5, qid 0 00:23:28.124 [2024-12-06 15:40:33.990397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1700, cid 4, qid 0 00:23:28.124 [2024-12-06 15:40:33.990401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1a00, cid 6, qid 0 00:23:28.124 [2024-12-06 15:40:33.990405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1b80, cid 7, qid 0 00:23:28.124 [2024-12-06 15:40:33.990535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.124 [2024-12-06 15:40:33.990540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:23:28.124 [2024-12-06 15:40:33.990544] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990547] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74f690): datao=0, datal=8192, cccid=5 00:23:28.124 [2024-12-06 15:40:33.990551] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b1880) on tqpair(0x74f690): expected_datao=0, payload_size=8192 00:23:28.124 [2024-12-06 15:40:33.990554] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990574] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990579] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.124 [2024-12-06 15:40:33.990588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.124 [2024-12-06 15:40:33.990591] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990594] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74f690): datao=0, datal=512, cccid=4 00:23:28.124 [2024-12-06 15:40:33.990598] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b1700) on tqpair(0x74f690): expected_datao=0, payload_size=512 00:23:28.124 [2024-12-06 15:40:33.990602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990607] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990611] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.124 [2024-12-06 15:40:33.990620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.124 [2024-12-06 15:40:33.990623] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990626] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74f690): datao=0, datal=512, cccid=6 00:23:28.124 [2024-12-06 15:40:33.990630] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b1a00) on tqpair(0x74f690): expected_datao=0, payload_size=512 00:23:28.124 [2024-12-06 15:40:33.990633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990639] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990642] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.124 [2024-12-06 15:40:33.990651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.124 [2024-12-06 15:40:33.990656] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990659] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x74f690): datao=0, datal=4096, cccid=7 00:23:28.124 [2024-12-06 15:40:33.990663] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7b1b80) on tqpair(0x74f690): expected_datao=0, payload_size=4096 00:23:28.124 [2024-12-06 15:40:33.990667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990672] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990675] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.124 [2024-12-06 15:40:33.990688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.124 [2024-12-06 15:40:33.990691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:28.124 [2024-12-06 15:40:33.990694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1880) on tqpair=0x74f690 00:23:28.124 [2024-12-06 15:40:33.990704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.124 [2024-12-06 15:40:33.990709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.124 [2024-12-06 15:40:33.990712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1700) on tqpair=0x74f690 00:23:28.124 [2024-12-06 15:40:33.990723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.124 [2024-12-06 15:40:33.990728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.124 [2024-12-06 15:40:33.990731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1a00) on tqpair=0x74f690 00:23:28.124 [2024-12-06 15:40:33.990740] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.124 [2024-12-06 15:40:33.990745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.124 [2024-12-06 15:40:33.990748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.124 [2024-12-06 15:40:33.990751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1b80) on tqpair=0x74f690 00:23:28.124 ===================================================== 00:23:28.124 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:28.124 ===================================================== 00:23:28.124 Controller Capabilities/Features 00:23:28.124 ================================ 00:23:28.124 Vendor ID: 8086 00:23:28.124 Subsystem Vendor ID: 8086 00:23:28.124 Serial Number: SPDK00000000000001 00:23:28.124 Model Number: SPDK bdev Controller 00:23:28.124 
Firmware Version: 25.01 00:23:28.124 Recommended Arb Burst: 6 00:23:28.124 IEEE OUI Identifier: e4 d2 5c 00:23:28.124 Multi-path I/O 00:23:28.124 May have multiple subsystem ports: Yes 00:23:28.124 May have multiple controllers: Yes 00:23:28.124 Associated with SR-IOV VF: No 00:23:28.124 Max Data Transfer Size: 131072 00:23:28.124 Max Number of Namespaces: 32 00:23:28.124 Max Number of I/O Queues: 127 00:23:28.124 NVMe Specification Version (VS): 1.3 00:23:28.124 NVMe Specification Version (Identify): 1.3 00:23:28.124 Maximum Queue Entries: 128 00:23:28.124 Contiguous Queues Required: Yes 00:23:28.124 Arbitration Mechanisms Supported 00:23:28.125 Weighted Round Robin: Not Supported 00:23:28.125 Vendor Specific: Not Supported 00:23:28.125 Reset Timeout: 15000 ms 00:23:28.125 Doorbell Stride: 4 bytes 00:23:28.125 NVM Subsystem Reset: Not Supported 00:23:28.125 Command Sets Supported 00:23:28.125 NVM Command Set: Supported 00:23:28.125 Boot Partition: Not Supported 00:23:28.125 Memory Page Size Minimum: 4096 bytes 00:23:28.125 Memory Page Size Maximum: 4096 bytes 00:23:28.125 Persistent Memory Region: Not Supported 00:23:28.125 Optional Asynchronous Events Supported 00:23:28.125 Namespace Attribute Notices: Supported 00:23:28.125 Firmware Activation Notices: Not Supported 00:23:28.125 ANA Change Notices: Not Supported 00:23:28.125 PLE Aggregate Log Change Notices: Not Supported 00:23:28.125 LBA Status Info Alert Notices: Not Supported 00:23:28.125 EGE Aggregate Log Change Notices: Not Supported 00:23:28.125 Normal NVM Subsystem Shutdown event: Not Supported 00:23:28.125 Zone Descriptor Change Notices: Not Supported 00:23:28.125 Discovery Log Change Notices: Not Supported 00:23:28.125 Controller Attributes 00:23:28.125 128-bit Host Identifier: Supported 00:23:28.125 Non-Operational Permissive Mode: Not Supported 00:23:28.125 NVM Sets: Not Supported 00:23:28.125 Read Recovery Levels: Not Supported 00:23:28.125 Endurance Groups: Not Supported 00:23:28.125 Predictable 
Latency Mode: Not Supported 00:23:28.125 Traffic Based Keep Alive: Not Supported 00:23:28.125 Namespace Granularity: Not Supported 00:23:28.125 SQ Associations: Not Supported 00:23:28.125 UUID List: Not Supported 00:23:28.125 Multi-Domain Subsystem: Not Supported 00:23:28.125 Fixed Capacity Management: Not Supported 00:23:28.125 Variable Capacity Management: Not Supported 00:23:28.125 Delete Endurance Group: Not Supported 00:23:28.125 Delete NVM Set: Not Supported 00:23:28.125 Extended LBA Formats Supported: Not Supported 00:23:28.125 Flexible Data Placement Supported: Not Supported 00:23:28.125 00:23:28.125 Controller Memory Buffer Support 00:23:28.125 ================================ 00:23:28.125 Supported: No 00:23:28.125 00:23:28.125 Persistent Memory Region Support 00:23:28.125 ================================ 00:23:28.125 Supported: No 00:23:28.125 00:23:28.125 Admin Command Set Attributes 00:23:28.125 ============================ 00:23:28.125 Security Send/Receive: Not Supported 00:23:28.125 Format NVM: Not Supported 00:23:28.125 Firmware Activate/Download: Not Supported 00:23:28.125 Namespace Management: Not Supported 00:23:28.125 Device Self-Test: Not Supported 00:23:28.125 Directives: Not Supported 00:23:28.125 NVMe-MI: Not Supported 00:23:28.125 Virtualization Management: Not Supported 00:23:28.125 Doorbell Buffer Config: Not Supported 00:23:28.125 Get LBA Status Capability: Not Supported 00:23:28.125 Command & Feature Lockdown Capability: Not Supported 00:23:28.125 Abort Command Limit: 4 00:23:28.125 Async Event Request Limit: 4 00:23:28.125 Number of Firmware Slots: N/A 00:23:28.125 Firmware Slot 1 Read-Only: N/A 00:23:28.125 Firmware Activation Without Reset: N/A 00:23:28.125 Multiple Update Detection Support: N/A 00:23:28.125 Firmware Update Granularity: No Information Provided 00:23:28.125 Per-Namespace SMART Log: No 00:23:28.125 Asymmetric Namespace Access Log Page: Not Supported 00:23:28.125 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:28.125 
Command Effects Log Page: Supported 00:23:28.125 Get Log Page Extended Data: Supported 00:23:28.125 Telemetry Log Pages: Not Supported 00:23:28.125 Persistent Event Log Pages: Not Supported 00:23:28.125 Supported Log Pages Log Page: May Support 00:23:28.125 Commands Supported & Effects Log Page: Not Supported 00:23:28.125 Feature Identifiers & Effects Log Page: May Support 00:23:28.125 NVMe-MI Commands & Effects Log Page: May Support 00:23:28.125 Data Area 4 for Telemetry Log: Not Supported 00:23:28.125 Error Log Page Entries Supported: 128 00:23:28.125 Keep Alive: Supported 00:23:28.125 Keep Alive Granularity: 10000 ms 00:23:28.125 00:23:28.125 NVM Command Set Attributes 00:23:28.125 ========================== 00:23:28.125 Submission Queue Entry Size 00:23:28.125 Max: 64 00:23:28.125 Min: 64 00:23:28.125 Completion Queue Entry Size 00:23:28.125 Max: 16 00:23:28.125 Min: 16 00:23:28.125 Number of Namespaces: 32 00:23:28.125 Compare Command: Supported 00:23:28.125 Write Uncorrectable Command: Not Supported 00:23:28.125 Dataset Management Command: Supported 00:23:28.125 Write Zeroes Command: Supported 00:23:28.125 Set Features Save Field: Not Supported 00:23:28.125 Reservations: Supported 00:23:28.125 Timestamp: Not Supported 00:23:28.125 Copy: Supported 00:23:28.125 Volatile Write Cache: Present 00:23:28.125 Atomic Write Unit (Normal): 1 00:23:28.125 Atomic Write Unit (PFail): 1 00:23:28.125 Atomic Compare & Write Unit: 1 00:23:28.125 Fused Compare & Write: Supported 00:23:28.125 Scatter-Gather List 00:23:28.125 SGL Command Set: Supported 00:23:28.125 SGL Keyed: Supported 00:23:28.125 SGL Bit Bucket Descriptor: Not Supported 00:23:28.125 SGL Metadata Pointer: Not Supported 00:23:28.125 Oversized SGL: Not Supported 00:23:28.125 SGL Metadata Address: Not Supported 00:23:28.125 SGL Offset: Supported 00:23:28.125 Transport SGL Data Block: Not Supported 00:23:28.125 Replay Protected Memory Block: Not Supported 00:23:28.125 00:23:28.125 Firmware Slot Information 
00:23:28.125 ========================= 00:23:28.125 Active slot: 1 00:23:28.125 Slot 1 Firmware Revision: 25.01 00:23:28.125 00:23:28.125 00:23:28.125 Commands Supported and Effects 00:23:28.125 ============================== 00:23:28.125 Admin Commands 00:23:28.125 -------------- 00:23:28.125 Get Log Page (02h): Supported 00:23:28.125 Identify (06h): Supported 00:23:28.125 Abort (08h): Supported 00:23:28.125 Set Features (09h): Supported 00:23:28.125 Get Features (0Ah): Supported 00:23:28.125 Asynchronous Event Request (0Ch): Supported 00:23:28.125 Keep Alive (18h): Supported 00:23:28.125 I/O Commands 00:23:28.125 ------------ 00:23:28.125 Flush (00h): Supported LBA-Change 00:23:28.125 Write (01h): Supported LBA-Change 00:23:28.125 Read (02h): Supported 00:23:28.125 Compare (05h): Supported 00:23:28.125 Write Zeroes (08h): Supported LBA-Change 00:23:28.125 Dataset Management (09h): Supported LBA-Change 00:23:28.125 Copy (19h): Supported LBA-Change 00:23:28.125 00:23:28.125 Error Log 00:23:28.125 ========= 00:23:28.125 00:23:28.125 Arbitration 00:23:28.125 =========== 00:23:28.125 Arbitration Burst: 1 00:23:28.125 00:23:28.125 Power Management 00:23:28.125 ================ 00:23:28.125 Number of Power States: 1 00:23:28.125 Current Power State: Power State #0 00:23:28.125 Power State #0: 00:23:28.125 Max Power: 0.00 W 00:23:28.125 Non-Operational State: Operational 00:23:28.125 Entry Latency: Not Reported 00:23:28.125 Exit Latency: Not Reported 00:23:28.125 Relative Read Throughput: 0 00:23:28.125 Relative Read Latency: 0 00:23:28.125 Relative Write Throughput: 0 00:23:28.125 Relative Write Latency: 0 00:23:28.125 Idle Power: Not Reported 00:23:28.125 Active Power: Not Reported 00:23:28.125 Non-Operational Permissive Mode: Not Supported 00:23:28.125 00:23:28.125 Health Information 00:23:28.125 ================== 00:23:28.125 Critical Warnings: 00:23:28.125 Available Spare Space: OK 00:23:28.125 Temperature: OK 00:23:28.125 Device Reliability: OK 00:23:28.125 Read 
Only: No 00:23:28.125 Volatile Memory Backup: OK 00:23:28.125 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:28.125 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:28.125 Available Spare: 0% 00:23:28.125 Available Spare Threshold: 0% 00:23:28.125 Life Percentage Used:[2024-12-06 15:40:33.990828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.125 [2024-12-06 15:40:33.990833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x74f690) 00:23:28.125 [2024-12-06 15:40:33.990839] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.125 [2024-12-06 15:40:33.990850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1b80, cid 7, qid 0 00:23:28.125 [2024-12-06 15:40:33.990917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.125 [2024-12-06 15:40:33.990923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.125 [2024-12-06 15:40:33.990926] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.125 [2024-12-06 15:40:33.990929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1b80) on tqpair=0x74f690 00:23:28.125 [2024-12-06 15:40:33.990959] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:28.125 [2024-12-06 15:40:33.990968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1100) on tqpair=0x74f690 00:23:28.125 [2024-12-06 15:40:33.990974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.125 [2024-12-06 15:40:33.990978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1280) on tqpair=0x74f690 00:23:28.125 [2024-12-06 15:40:33.990982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.126 [2024-12-06 15:40:33.990986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1400) on tqpair=0x74f690 00:23:28.126 [2024-12-06 15:40:33.990990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.126 [2024-12-06 15:40:33.990996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1580) on tqpair=0x74f690 00:23:28.126 [2024-12-06 15:40:33.991000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.126 [2024-12-06 15:40:33.991007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.126 [2024-12-06 15:40:33.991010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.126 [2024-12-06 15:40:33.991013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74f690) 00:23:28.126 [2024-12-06 15:40:33.991019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.126 [2024-12-06 15:40:33.991029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1580, cid 3, qid 0 00:23:28.126 [2024-12-06 15:40:33.991087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.126 [2024-12-06 15:40:33.991093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.126 [2024-12-06 15:40:33.991096] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.126 [2024-12-06 15:40:33.991099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1580) on tqpair=0x74f690 00:23:28.126 [2024-12-06 15:40:33.991105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.126 [2024-12-06 15:40:33.991108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:28.126 [2024-12-06 15:40:33.991111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74f690) 00:23:28.126 [2024-12-06 15:40:33.991117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.126 [2024-12-06 15:40:33.991129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1580, cid 3, qid 0 00:23:28.126 [2024-12-06 15:40:33.991217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.126 [2024-12-06 15:40:33.991223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.126 [2024-12-06 15:40:33.991226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.126 [2024-12-06 15:40:33.991229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1580) on tqpair=0x74f690 00:23:28.126 [2024-12-06 15:40:33.991233] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:28.126 [2024-12-06 15:40:33.991237] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:28.126 [2024-12-06 15:40:33.991245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.126 [2024-12-06 15:40:33.991248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.126 [2024-12-06 15:40:33.991252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74f690) 00:23:28.126 [2024-12-06 15:40:33.991257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.126 [2024-12-06 15:40:33.991266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1580, cid 3, qid 0 00:23:28.126 [2024-12-06 15:40:33.995377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.126 [2024-12-06 
15:40:33.995386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.126 [2024-12-06 15:40:33.995389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.126 [2024-12-06 15:40:33.995392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1580) on tqpair=0x74f690 00:23:28.126 [2024-12-06 15:40:33.995403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.126 [2024-12-06 15:40:33.995407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.126 [2024-12-06 15:40:33.995410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x74f690) 00:23:28.126 [2024-12-06 15:40:33.995416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.126 [2024-12-06 15:40:33.995430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7b1580, cid 3, qid 0 00:23:28.126 [2024-12-06 15:40:33.995511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.126 [2024-12-06 15:40:33.995517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.126 [2024-12-06 15:40:33.995520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.126 [2024-12-06 15:40:33.995523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7b1580) on tqpair=0x74f690 00:23:28.126 [2024-12-06 15:40:33.995529] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:23:28.126 0% 00:23:28.126 Data Units Read: 0 00:23:28.126 Data Units Written: 0 00:23:28.126 Host Read Commands: 0 00:23:28.126 Host Write Commands: 0 00:23:28.126 Controller Busy Time: 0 minutes 00:23:28.126 Power Cycles: 0 00:23:28.126 Power On Hours: 0 hours 00:23:28.126 Unsafe Shutdowns: 0 00:23:28.126 Unrecoverable Media Errors: 0 00:23:28.126 Lifetime Error Log Entries: 0 
00:23:28.126 Warning Temperature Time: 0 minutes 00:23:28.126 Critical Temperature Time: 0 minutes 00:23:28.126 00:23:28.126 Number of Queues 00:23:28.126 ================ 00:23:28.126 Number of I/O Submission Queues: 127 00:23:28.126 Number of I/O Completion Queues: 127 00:23:28.126 00:23:28.126 Active Namespaces 00:23:28.126 ================= 00:23:28.126 Namespace ID:1 00:23:28.126 Error Recovery Timeout: Unlimited 00:23:28.126 Command Set Identifier: NVM (00h) 00:23:28.126 Deallocate: Supported 00:23:28.126 Deallocated/Unwritten Error: Not Supported 00:23:28.126 Deallocated Read Value: Unknown 00:23:28.126 Deallocate in Write Zeroes: Not Supported 00:23:28.126 Deallocated Guard Field: 0xFFFF 00:23:28.126 Flush: Supported 00:23:28.126 Reservation: Supported 00:23:28.126 Namespace Sharing Capabilities: Multiple Controllers 00:23:28.126 Size (in LBAs): 131072 (0GiB) 00:23:28.126 Capacity (in LBAs): 131072 (0GiB) 00:23:28.126 Utilization (in LBAs): 131072 (0GiB) 00:23:28.126 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:28.126 EUI64: ABCDEF0123456789 00:23:28.126 UUID: 8e6f86fc-19c8-4f8c-a027-e8f4dad93dc1 00:23:28.126 Thin Provisioning: Not Supported 00:23:28.126 Per-NS Atomic Units: Yes 00:23:28.126 Atomic Boundary Size (Normal): 0 00:23:28.126 Atomic Boundary Size (PFail): 0 00:23:28.126 Atomic Boundary Offset: 0 00:23:28.126 Maximum Single Source Range Length: 65535 00:23:28.126 Maximum Copy Length: 65535 00:23:28.126 Maximum Source Range Count: 1 00:23:28.126 NGUID/EUI64 Never Reused: No 00:23:28.126 Namespace Write Protected: No 00:23:28.126 Number of LBA Formats: 1 00:23:28.126 Current LBA Format: LBA Format #00 00:23:28.126 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:28.126 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.126 15:40:34 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.126 rmmod nvme_tcp 00:23:28.126 rmmod nvme_fabrics 00:23:28.126 rmmod nvme_keyring 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3091914 ']' 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3091914 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3091914 ']' 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3091914 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:28.126 15:40:34 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.126 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3091914 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3091914' 00:23:28.386 killing process with pid 3091914 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3091914 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3091914 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:28.386 15:40:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.920 15:40:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:30.920 00:23:30.920 real 0m9.339s 00:23:30.920 user 0m5.493s 00:23:30.920 sys 0m4.876s 00:23:30.920 15:40:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.920 15:40:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.920 ************************************ 00:23:30.920 END TEST nvmf_identify 00:23:30.920 ************************************ 00:23:30.920 15:40:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:30.920 15:40:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:30.920 15:40:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:30.920 15:40:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.920 ************************************ 00:23:30.920 START TEST nvmf_perf 00:23:30.920 ************************************ 00:23:30.920 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:30.920 * Looking for test storage... 
00:23:30.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:30.920 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:30.920 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:30.920 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:30.920 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:30.920 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.920 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.921 --rc genhtml_branch_coverage=1 00:23:30.921 --rc genhtml_function_coverage=1 00:23:30.921 --rc genhtml_legend=1 00:23:30.921 --rc geninfo_all_blocks=1 00:23:30.921 --rc geninfo_unexecuted_blocks=1 00:23:30.921 00:23:30.921 ' 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:23:30.921 --rc genhtml_branch_coverage=1 00:23:30.921 --rc genhtml_function_coverage=1 00:23:30.921 --rc genhtml_legend=1 00:23:30.921 --rc geninfo_all_blocks=1 00:23:30.921 --rc geninfo_unexecuted_blocks=1 00:23:30.921 00:23:30.921 ' 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.921 --rc genhtml_branch_coverage=1 00:23:30.921 --rc genhtml_function_coverage=1 00:23:30.921 --rc genhtml_legend=1 00:23:30.921 --rc geninfo_all_blocks=1 00:23:30.921 --rc geninfo_unexecuted_blocks=1 00:23:30.921 00:23:30.921 ' 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.921 --rc genhtml_branch_coverage=1 00:23:30.921 --rc genhtml_function_coverage=1 00:23:30.921 --rc genhtml_legend=1 00:23:30.921 --rc geninfo_all_blocks=1 00:23:30.921 --rc geninfo_unexecuted_blocks=1 00:23:30.921 00:23:30.921 ' 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.921 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:30.921 15:40:36 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:30.921 15:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:37.494 15:40:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.494 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.495 
15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:37.495 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:37.495 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:37.495 Found net devices under 0000:86:00.0: cvl_0_0 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.495 15:40:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:37.495 Found net devices under 0000:86:00.1: cvl_0_1 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:23:37.495 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:37.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:23:37.495 00:23:37.495 --- 10.0.0.2 ping statistics --- 00:23:37.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.496 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:37.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:23:37.496 00:23:37.496 --- 10.0.0.1 ping statistics --- 00:23:37.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.496 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3095549 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3095549 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3095549 ']' 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:37.496 [2024-12-06 15:40:42.717480] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:23:37.496 [2024-12-06 15:40:42.717523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.496 [2024-12-06 15:40:42.795796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:37.496 [2024-12-06 15:40:42.839832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.496 [2024-12-06 15:40:42.839868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.496 [2024-12-06 15:40:42.839878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.496 [2024-12-06 15:40:42.839886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.496 [2024-12-06 15:40:42.839892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
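For reference, the test-network plumbing that nvmf/common.sh performed above (the @250-@291 trace lines: flushing addresses, creating the `cvl_0_0_ns_spdk` namespace, assigning 10.0.0.1/10.0.0.2, opening port 4420, and ping-verifying both directions) amounts to the following. This is a dry-run sketch using the interface names and addresses from this particular run, not the harness's actual code; swap the `run` stub for direct execution (root required) to reproduce it.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the TCP loopback setup traced above.
# Interface names (cvl_0_0 / cvl_0_1) and IPs are taken from this run's log.
run() { printf '+ %s\n' "$*"; }   # replace body with "$@" to actually execute (needs root)

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                         # target-side port moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator IP stays on the host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0 # target IP lives inside the namespace
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
run ping -c 1 10.0.0.2                                      # host -> target reachability check
run ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> host reachability check
```

With this split, the nvmf target (started under `ip netns exec cvl_0_0_ns_spdk`) and the initiator share one physical NIC's two ports yet see each other over a real TCP path.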
00:23:37.496 [2024-12-06 15:40:42.841376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.496 [2024-12-06 15:40:42.841489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.496 [2024-12-06 15:40:42.841593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.496 [2024-12-06 15:40:42.841595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:37.496 15:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:40.028 15:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:40.028 15:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:40.286 15:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:40.286 15:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:40.544 15:40:46 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:40.544 15:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:40.544 15:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:40.544 15:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:40.544 15:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:40.802 [2024-12-06 15:40:46.600929] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.802 15:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:41.060 15:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:41.060 15:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:41.060 15:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:41.060 15:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:41.319 15:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.576 [2024-12-06 15:40:47.415950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.576 15:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:23:41.834 15:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:41.834 15:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:41.834 15:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:41.834 15:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:43.286 Initializing NVMe Controllers 00:23:43.286 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:43.286 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:43.286 Initialization complete. Launching workers. 00:23:43.286 ======================================================== 00:23:43.286 Latency(us) 00:23:43.286 Device Information : IOPS MiB/s Average min max 00:23:43.286 PCIE (0000:5e:00.0) NSID 1 from core 0: 97812.05 382.08 326.78 24.54 8197.68 00:23:43.286 ======================================================== 00:23:43.286 Total : 97812.05 382.08 326.78 24.54 8197.68 00:23:43.286 00:23:43.286 15:40:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:44.749 Initializing NVMe Controllers 00:23:44.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:44.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:44.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:44.749 Initialization complete. Launching workers. 
00:23:44.749 ======================================================== 00:23:44.749 Latency(us) 00:23:44.749 Device Information : IOPS MiB/s Average min max 00:23:44.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 112.00 0.44 9247.00 103.55 45614.07 00:23:44.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19681.35 6986.41 47897.16 00:23:44.749 ======================================================== 00:23:44.749 Total : 163.00 0.64 12511.74 103.55 47897.16 00:23:44.749 00:23:44.749 15:40:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:45.688 Initializing NVMe Controllers 00:23:45.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:45.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:45.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:45.688 Initialization complete. Launching workers. 
00:23:45.688 ======================================================== 00:23:45.688 Latency(us) 00:23:45.688 Device Information : IOPS MiB/s Average min max 00:23:45.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11363.77 44.39 2824.56 414.02 6232.61 00:23:45.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3819.59 14.92 8411.26 5486.58 16009.66 00:23:45.688 ======================================================== 00:23:45.688 Total : 15183.36 59.31 4229.97 414.02 16009.66 00:23:45.688 00:23:45.688 15:40:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:45.688 15:40:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:45.688 15:40:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:48.224 Initializing NVMe Controllers 00:23:48.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:48.224 Controller IO queue size 128, less than required. 00:23:48.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:48.224 Controller IO queue size 128, less than required. 00:23:48.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:48.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:48.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:48.224 Initialization complete. Launching workers. 
00:23:48.224 ======================================================== 00:23:48.224 Latency(us) 00:23:48.224 Device Information : IOPS MiB/s Average min max 00:23:48.224 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1868.98 467.25 69623.87 47974.76 130023.14 00:23:48.224 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 575.00 143.75 225938.85 65458.71 377877.56 00:23:48.224 ======================================================== 00:23:48.224 Total : 2443.98 610.99 106400.10 47974.76 377877.56 00:23:48.224 00:23:48.224 15:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:48.224 No valid NVMe controllers or AIO or URING devices found 00:23:48.224 Initializing NVMe Controllers 00:23:48.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:48.224 Controller IO queue size 128, less than required. 00:23:48.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:48.224 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:48.224 Controller IO queue size 128, less than required. 00:23:48.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:48.224 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:48.224 WARNING: Some requested NVMe devices were skipped 00:23:48.224 15:40:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:50.766 Initializing NVMe Controllers 00:23:50.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:50.766 Controller IO queue size 128, less than required. 00:23:50.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:50.766 Controller IO queue size 128, less than required. 00:23:50.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:50.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:50.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:50.766 Initialization complete. Launching workers. 
00:23:50.766 00:23:50.766 ==================== 00:23:50.766 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:50.766 TCP transport: 00:23:50.766 polls: 15567 00:23:50.766 idle_polls: 12105 00:23:50.766 sock_completions: 3462 00:23:50.766 nvme_completions: 6213 00:23:50.766 submitted_requests: 9238 00:23:50.766 queued_requests: 1 00:23:50.766 00:23:50.766 ==================== 00:23:50.766 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:50.766 TCP transport: 00:23:50.766 polls: 15689 00:23:50.766 idle_polls: 11981 00:23:50.766 sock_completions: 3708 00:23:50.766 nvme_completions: 6579 00:23:50.766 submitted_requests: 9888 00:23:50.766 queued_requests: 1 00:23:50.766 ======================================================== 00:23:50.766 Latency(us) 00:23:50.766 Device Information : IOPS MiB/s Average min max 00:23:50.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1552.70 388.18 84311.98 47258.78 133160.15 00:23:50.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1644.18 411.05 78679.40 47151.66 128984.67 00:23:50.766 ======================================================== 00:23:50.766 Total : 3196.88 799.22 81415.10 47151.66 133160.15 00:23:50.766 00:23:50.766 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:50.766 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.025 rmmod nvme_tcp 00:23:51.025 rmmod nvme_fabrics 00:23:51.025 rmmod nvme_keyring 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3095549 ']' 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3095549 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3095549 ']' 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3095549 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3095549 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3095549' 00:23:51.025 killing process with pid 3095549 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 3095549 00:23:51.025 15:40:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3095549 00:23:53.561 15:40:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:53.561 15:40:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:53.561 15:40:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:53.561 15:40:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:53.561 15:40:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:53.561 15:40:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:53.561 15:40:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:53.561 15:40:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:53.561 15:40:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:53.561 15:40:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.561 15:40:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.562 15:40:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:55.470 00:23:55.470 real 0m24.591s 00:23:55.470 user 1m4.228s 00:23:55.470 sys 0m8.252s 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:55.470 ************************************ 00:23:55.470 END TEST nvmf_perf 00:23:55.470 ************************************ 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.470 ************************************ 00:23:55.470 START TEST nvmf_fio_host 00:23:55.470 ************************************ 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:55.470 * Looking for test storage... 00:23:55.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.470 15:41:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.470 15:41:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:55.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.470 --rc genhtml_branch_coverage=1 00:23:55.470 --rc genhtml_function_coverage=1 00:23:55.470 --rc genhtml_legend=1 00:23:55.470 --rc geninfo_all_blocks=1 00:23:55.470 --rc geninfo_unexecuted_blocks=1 00:23:55.470 00:23:55.470 ' 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:55.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.470 --rc genhtml_branch_coverage=1 00:23:55.470 --rc genhtml_function_coverage=1 00:23:55.470 --rc genhtml_legend=1 00:23:55.470 --rc geninfo_all_blocks=1 00:23:55.470 --rc geninfo_unexecuted_blocks=1 00:23:55.470 00:23:55.470 ' 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:55.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.470 --rc genhtml_branch_coverage=1 00:23:55.470 --rc genhtml_function_coverage=1 00:23:55.470 --rc genhtml_legend=1 00:23:55.470 --rc geninfo_all_blocks=1 00:23:55.470 --rc geninfo_unexecuted_blocks=1 00:23:55.470 00:23:55.470 ' 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:55.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.470 --rc genhtml_branch_coverage=1 00:23:55.470 --rc genhtml_function_coverage=1 00:23:55.470 --rc genhtml_legend=1 00:23:55.470 --rc geninfo_all_blocks=1 00:23:55.470 --rc geninfo_unexecuted_blocks=1 00:23:55.470 00:23:55.470 ' 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.470 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:55.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:55.471 15:41:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:55.471 15:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:24:02.041 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:02.041 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.041 15:41:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:02.041 Found net devices under 0000:86:00.0: cvl_0_0 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:02.041 Found net devices under 0000:86:00.1: cvl_0_1 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:02.041 15:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.041 15:41:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.041 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.041 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.041 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:02.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:24:02.042 00:24:02.042 --- 10.0.0.2 ping statistics --- 00:24:02.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.042 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:02.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:24:02.042 00:24:02.042 --- 10.0.0.1 ping statistics --- 00:24:02.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.042 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3101786 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3101786 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3101786 ']' 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.042 15:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.042 [2024-12-06 15:41:07.336455] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:24:02.042 [2024-12-06 15:41:07.336501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.042 [2024-12-06 15:41:07.414806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:02.042 [2024-12-06 15:41:07.454468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.042 [2024-12-06 15:41:07.454506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:02.042 [2024-12-06 15:41:07.454516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.042 [2024-12-06 15:41:07.454523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.042 [2024-12-06 15:41:07.454529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.042 [2024-12-06 15:41:07.456195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.042 [2024-12-06 15:41:07.456303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.042 [2024-12-06 15:41:07.456416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.042 [2024-12-06 15:41:07.456417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:02.300 15:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.300 15:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:02.300 15:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:02.559 [2024-12-06 15:41:08.341921] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.559 15:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:02.559 15:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.559 15:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.559 15:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:02.817 Malloc1 00:24:02.817 15:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:03.076 15:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:03.076 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.334 [2024-12-06 15:41:09.198989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.334 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:03.593 15:41:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:03.593 15:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:03.852 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:03.852 fio-3.35 00:24:03.852 Starting 1 thread 00:24:06.383 00:24:06.383 test: (groupid=0, jobs=1): err= 0: pid=3102340: Fri Dec 6 15:41:12 2024 00:24:06.383 read: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s)(93.2MiB/2005msec) 00:24:06.383 slat (nsec): min=1536, max=253110, avg=1731.35, stdev=2267.37 00:24:06.383 clat (usec): min=3280, max=10533, avg=5937.56, stdev=460.44 00:24:06.383 lat (usec): min=3313, max=10534, avg=5939.29, stdev=460.44 00:24:06.383 clat percentiles (usec): 00:24:06.383 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5604], 00:24:06.383 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:24:06.383 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6652], 00:24:06.383 | 99.00th=[ 7046], 99.50th=[ 7308], 99.90th=[ 8586], 99.95th=[ 9241], 00:24:06.383 | 99.99th=[10421] 00:24:06.383 bw ( KiB/s): min=46400, max=48256, per=99.94%, avg=47590.00, stdev=848.19, samples=4 00:24:06.383 iops : min=11600, max=12064, avg=11897.50, stdev=212.05, samples=4 00:24:06.383 write: IOPS=11.8k, BW=46.3MiB/s (48.5MB/s)(92.8MiB/2005msec); 0 zone resets 00:24:06.383 slat (nsec): min=1571, max=226806, avg=1783.16, stdev=1672.35 00:24:06.383 clat (usec): min=2493, max=9194, avg=4794.34, stdev=375.50 00:24:06.384 lat (usec): min=2508, max=9196, avg=4796.12, stdev=375.57 00:24:06.384 clat percentiles (usec): 00:24:06.384 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:24:06.384 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 
00:24:06.384 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:24:06.384 | 99.00th=[ 5735], 99.50th=[ 6128], 99.90th=[ 7177], 99.95th=[ 7963], 00:24:06.384 | 99.99th=[ 9110] 00:24:06.384 bw ( KiB/s): min=46928, max=47936, per=100.00%, avg=47412.00, stdev=450.92, samples=4 00:24:06.384 iops : min=11732, max=11984, avg=11853.00, stdev=112.73, samples=4 00:24:06.384 lat (msec) : 4=0.71%, 10=99.28%, 20=0.01% 00:24:06.384 cpu : usr=72.60%, sys=25.75%, ctx=148, majf=0, minf=2 00:24:06.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:06.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:06.384 issued rwts: total=23868,23760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:06.384 00:24:06.384 Run status group 0 (all jobs): 00:24:06.384 READ: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=93.2MiB (97.8MB), run=2005-2005msec 00:24:06.384 WRITE: bw=46.3MiB/s (48.5MB/s), 46.3MiB/s-46.3MiB/s (48.5MB/s-48.5MB/s), io=92.8MiB (97.3MB), run=2005-2005msec 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:06.384 15:41:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:06.384 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:06.384 fio-3.35 00:24:06.384 Starting 1 thread 00:24:08.918 00:24:08.918 test: (groupid=0, jobs=1): err= 0: pid=3102912: Fri Dec 6 15:41:14 2024 00:24:08.918 read: IOPS=11.1k, BW=173MiB/s (182MB/s)(347MiB/2004msec) 00:24:08.918 slat (nsec): min=2464, max=81745, avg=2778.68, stdev=1156.04 00:24:08.918 clat (usec): min=1747, max=12352, avg=6667.79, stdev=1608.82 00:24:08.919 lat (usec): min=1750, max=12354, avg=6670.57, stdev=1608.89 00:24:08.919 clat percentiles (usec): 00:24:08.919 | 1.00th=[ 3458], 5.00th=[ 4178], 10.00th=[ 4621], 20.00th=[ 5276], 00:24:08.919 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7111], 00:24:08.919 | 70.00th=[ 7439], 80.00th=[ 7898], 90.00th=[ 8717], 95.00th=[ 9503], 00:24:08.919 | 99.00th=[11076], 99.50th=[11731], 99.90th=[11994], 99.95th=[12125], 00:24:08.919 | 99.99th=[12256] 00:24:08.919 bw ( KiB/s): min=76256, max=94880, per=49.62%, avg=88008.00, stdev=8109.73, samples=4 00:24:08.919 iops : min= 4766, max= 5930, avg=5500.50, stdev=506.86, samples=4 00:24:08.919 write: IOPS=6329, BW=98.9MiB/s (104MB/s)(180MiB/1818msec); 0 zone resets 00:24:08.919 slat (usec): min=28, max=347, avg=31.34, stdev= 6.28 00:24:08.919 clat (usec): min=1802, max=13989, avg=8479.68, stdev=1441.11 00:24:08.919 lat (usec): min=1832, max=14019, avg=8511.03, stdev=1441.81 00:24:08.919 clat percentiles (usec): 00:24:08.919 | 1.00th=[ 5604], 5.00th=[ 6390], 10.00th=[ 6783], 
20.00th=[ 7242], 00:24:08.919 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8717], 00:24:08.919 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11076], 00:24:08.919 | 99.00th=[11994], 99.50th=[12256], 99.90th=[12911], 99.95th=[13566], 00:24:08.919 | 99.99th=[13960] 00:24:08.919 bw ( KiB/s): min=82432, max=98656, per=90.66%, avg=91808.00, stdev=6801.44, samples=4 00:24:08.919 iops : min= 5152, max= 6166, avg=5738.00, stdev=425.09, samples=4 00:24:08.919 lat (msec) : 2=0.03%, 4=2.40%, 10=89.98%, 20=7.59% 00:24:08.919 cpu : usr=86.53%, sys=12.77%, ctx=42, majf=0, minf=2 00:24:08.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:08.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:08.919 issued rwts: total=22213,11507,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:08.919 00:24:08.919 Run status group 0 (all jobs): 00:24:08.919 READ: bw=173MiB/s (182MB/s), 173MiB/s-173MiB/s (182MB/s-182MB/s), io=347MiB (364MB), run=2004-2004msec 00:24:08.919 WRITE: bw=98.9MiB/s (104MB/s), 98.9MiB/s-98.9MiB/s (104MB/s-104MB/s), io=180MiB (189MB), run=1818-1818msec 00:24:08.919 15:41:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
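As an aside on reading the fio summaries logged above: the reported IOPS, bandwidth, and total io figures are mutually consistent and can be re-derived from the issued I/O count, the block size, and the runtime. A minimal sketch, using the numbers from the first (bs=4096) randrw run in this log — the constants below are copied from the log, not measured here:

```python
# Cross-check the fio summary arithmetic for the first randrw run above.
# Inputs copied from the log: --bs=4096, "issued rwts: total=23868,...",
# "run=2005-2005msec" (reads only; writes follow the same arithmetic).
BS = 4096            # block size in bytes (--bs=4096)
READS = 23868        # completed read I/Os ("issued rwts: total=23868,23760")
RUNTIME_S = 2.005    # job runtime ("run=2005-2005msec")

iops = READS / RUNTIME_S          # I/Os per second
bw_mib_s = iops * BS / 2**20      # bandwidth in MiB/s (binary units, as fio's MiB/s)
io_mib = READS * BS / 2**20       # total data read in MiB

print(f"IOPS={iops:,.0f} BW={bw_mib_s:.1f}MiB/s io={io_mib:.1f}MiB")
# fio reported: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s), io=93.2MiB (97.8MB)
```

The MB/s values fio prints in parentheses are the same quantities in decimal units (dividing by 10^6 instead of 2^20), which is why 46.5MiB/s and 48.8MB/s appear side by side.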
00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:09.178 rmmod nvme_tcp 00:24:09.178 rmmod nvme_fabrics 00:24:09.178 rmmod nvme_keyring 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3101786 ']' 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3101786 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3101786 ']' 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3101786 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.178 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3101786 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3101786' 
00:24:09.437 killing process with pid 3101786 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3101786 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3101786 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.437 15:41:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:11.977 00:24:11.977 real 0m16.294s 00:24:11.977 user 0m48.887s 00:24:11.977 sys 0m6.484s 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.977 ************************************ 
00:24:11.977 END TEST nvmf_fio_host 00:24:11.977 ************************************ 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.977 ************************************ 00:24:11.977 START TEST nvmf_failover 00:24:11.977 ************************************ 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:11.977 * Looking for test storage... 00:24:11.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.977 15:41:17 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:11.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.977 --rc genhtml_branch_coverage=1 00:24:11.977 --rc genhtml_function_coverage=1 00:24:11.977 --rc genhtml_legend=1 00:24:11.977 --rc geninfo_all_blocks=1 00:24:11.977 --rc geninfo_unexecuted_blocks=1 00:24:11.977 00:24:11.977 ' 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:11.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.977 --rc genhtml_branch_coverage=1 00:24:11.977 --rc genhtml_function_coverage=1 00:24:11.977 --rc genhtml_legend=1 00:24:11.977 --rc geninfo_all_blocks=1 00:24:11.977 --rc geninfo_unexecuted_blocks=1 00:24:11.977 00:24:11.977 ' 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:11.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.977 --rc genhtml_branch_coverage=1 00:24:11.977 --rc genhtml_function_coverage=1 00:24:11.977 --rc genhtml_legend=1 00:24:11.977 --rc geninfo_all_blocks=1 00:24:11.977 --rc geninfo_unexecuted_blocks=1 00:24:11.977 00:24:11.977 ' 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:11.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.977 --rc genhtml_branch_coverage=1 00:24:11.977 --rc genhtml_function_coverage=1 00:24:11.977 --rc genhtml_legend=1 00:24:11.977 --rc 
geninfo_all_blocks=1 00:24:11.977 --rc geninfo_unexecuted_blocks=1 00:24:11.977 00:24:11.977 ' 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.977 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:11.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:11.978 15:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:18.549 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:18.549 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:18.549 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:18.549 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:24:18.549 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:18.549 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:18.549 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:18.549 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:18.549 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.550 15:41:23 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:18.550 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:18.550 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:18.550 Found net devices under 0000:86:00.0: cvl_0_0 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:18.550 Found net devices under 0000:86:00.1: cvl_0_1 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:18.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:18.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:24:18.550 00:24:18.550 --- 10.0.0.2 ping statistics --- 00:24:18.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.550 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:18.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:18.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:24:18.550 00:24:18.550 --- 10.0.0.1 ping statistics --- 00:24:18.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.550 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:18.550 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3106724 00:24:18.551 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:18.551 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 3106724 00:24:18.551 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3106724 ']' 00:24:18.551 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.551 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.551 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.551 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.551 15:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:18.551 [2024-12-06 15:41:23.720943] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:24:18.551 [2024-12-06 15:41:23.721001] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.551 [2024-12-06 15:41:23.801296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:18.551 [2024-12-06 15:41:23.844433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.551 [2024-12-06 15:41:23.844467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.551 [2024-12-06 15:41:23.844475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.551 [2024-12-06 15:41:23.844481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:18.551 [2024-12-06 15:41:23.844486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.551 [2024-12-06 15:41:23.845955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.551 [2024-12-06 15:41:23.846062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.551 [2024-12-06 15:41:23.846063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:18.810 15:41:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.810 15:41:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:18.810 15:41:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:18.810 15:41:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:18.810 15:41:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:18.810 15:41:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.810 15:41:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:18.810 [2024-12-06 15:41:24.750078] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.810 15:41:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:19.069 Malloc0 00:24:19.069 15:41:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:19.328 15:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:19.586 15:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.586 [2024-12-06 15:41:25.566312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.586 15:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:19.844 [2024-12-06 15:41:25.754853] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:19.844 15:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:20.102 [2024-12-06 15:41:25.951462] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:20.102 15:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:20.102 15:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3107199 00:24:20.102 15:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:20.102 15:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3107199 /var/tmp/bdevperf.sock 00:24:20.102 15:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 3107199 ']' 00:24:20.102 15:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.102 15:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.102 15:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:20.102 15:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.102 15:41:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:20.358 15:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.358 15:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:20.358 15:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:20.615 NVMe0n1 00:24:20.615 15:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:20.871 00:24:20.871 15:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3107275 00:24:20.871 15:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:20.871 15:41:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:24:21.802 15:41:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.058 [2024-12-06 15:41:27.983072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166fe20 is same with the state(6) to be set 00:24:22.059 15:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:25.331 15:41:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:25.588 00:24:25.588 15:41:31 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:25.846 [2024-12-06 15:41:31.658707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1670c40 is same with the state(6) to be set 00:24:25.846 15:41:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:29.126 15:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.126 [2024-12-06 15:41:34.872632] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.126 15:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:30.055 15:41:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:30.311 15:41:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3107275 00:24:36.872 { 00:24:36.872 "results": [ 00:24:36.872 { 00:24:36.872 "job": "NVMe0n1", 00:24:36.872 "core_mask": "0x1", 00:24:36.872 "workload": "verify", 00:24:36.872 "status": "finished", 00:24:36.872 "verify_range": { 00:24:36.872 "start": 0, 00:24:36.872 "length": 16384 00:24:36.872 }, 00:24:36.872 "queue_depth": 128, 00:24:36.872 "io_size": 4096, 00:24:36.872 "runtime": 15.008712, 00:24:36.872 "iops": 11419.234375341468, 00:24:36.872 "mibps": 44.60638427867761, 00:24:36.872 "io_failed": 4173, 00:24:36.872 "io_timeout": 0, 00:24:36.872 "avg_latency_us": 10920.874079713441, 00:24:36.872 "min_latency_us": 431.0552380952381, 00:24:36.872 "max_latency_us": 15291.733333333334 00:24:36.872 } 00:24:36.872 ], 00:24:36.872 "core_count": 1 00:24:36.872 } 00:24:36.872 15:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61
-- # killprocess 3107199 00:24:36.872 15:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3107199 ']' 00:24:36.872 15:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3107199 00:24:36.872 15:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:36.872 15:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.872 15:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3107199 00:24:36.872 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:36.872 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:36.872 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3107199' 00:24:36.872 killing process with pid 3107199 00:24:36.872 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3107199 00:24:36.872 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3107199 00:24:36.872 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:36.872 [2024-12-06 15:41:26.024985] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:24:36.872 [2024-12-06 15:41:26.025036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3107199 ] 00:24:36.873 [2024-12-06 15:41:26.101068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.873 [2024-12-06 15:41:26.141920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.873 Running I/O for 15 seconds... 00:24:36.873 11369.00 IOPS, 44.41 MiB/s [2024-12-06T14:41:42.871Z] [2024-12-06 15:41:27.983385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983560] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:100 nsid:1 lba:100784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:36.873 [2024-12-06 15:41:27.983736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:100920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.873 [2024-12-06 15:41:27.983973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.873 [2024-12-06 15:41:27.983979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 
[2024-12-06 15:41:27.983988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.983994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 
[2024-12-06 15:41:27.984235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.874 [2024-12-06 15:41:27.984301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.874 [2024-12-06 15:41:27.984316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.874 [2024-12-06 15:41:27.984332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.874 [2024-12-06 15:41:27.984346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.874 [2024-12-06 15:41:27.984361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.874 [2024-12-06 15:41:27.984379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.874 [2024-12-06 15:41:27.984394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.874 [2024-12-06 15:41:27.984466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.874 [2024-12-06 15:41:27.984480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 
[2024-12-06 15:41:27.984487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.874 [2024-12-06 15:41:27.984494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.874 [2024-12-06 15:41:27.984508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.874 [2024-12-06 15:41:27.984524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.874 [2024-12-06 15:41:27.984538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.874 [2024-12-06 15:41:27.984546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.874 [2024-12-06 15:41:27.984552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 
nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:36.875 [2024-12-06 15:41:27.984737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.984971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:36.875 [2024-12-06 15:41:27.984985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.984992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.985000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.985006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.985014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.985020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.985028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.985034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.985042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.985049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.985056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.985065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.985073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.985081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.985089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.985097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.985105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.985112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.985120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.985126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.875 [2024-12-06 15:41:27.985134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.875 [2024-12-06 15:41:27.985140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:27.985148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 
nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:27.985155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:27.985163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:27.985169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:27.985177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.876 [2024-12-06 15:41:27.985183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:27.985191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.876 [2024-12-06 15:41:27.985197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:27.985205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.876 [2024-12-06 15:41:27.985211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:27.985219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.876 [2024-12-06 15:41:27.985226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:36.876 [2024-12-06 15:41:27.985233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.876 [2024-12-06 15:41:27.985239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:27.985248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.876 [2024-12-06 15:41:27.985255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:27.985264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.876 [2024-12-06 15:41:27.985270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:27.985278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.876 [2024-12-06 15:41:27.985284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:27.985302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:36.876 [2024-12-06 15:41:27.985308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:36.876 [2024-12-06 15:41:27.985315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101240 len:8 PRP1 0x0 PRP2 0x0 00:24:36.876 [2024-12-06 15:41:27.985322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:36.876 [2024-12-06 15:41:27.985371] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:36.876 [2024-12-06 15:41:27.985395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.876 [2024-12-06 15:41:27.985402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:27.985410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.876 [2024-12-06 15:41:27.985416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:27.985423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.876 [2024-12-06 15:41:27.985430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:27.985437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.876 [2024-12-06 15:41:27.985443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:27.985450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:24:36.876 [2024-12-06 15:41:27.988233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:36.876 [2024-12-06 15:41:27.988261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefa0 (9): Bad file descriptor 00:24:36.876 [2024-12-06 15:41:28.011405] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:24:36.876 11289.50 IOPS, 44.10 MiB/s [2024-12-06T14:41:42.874Z] 11389.33 IOPS, 44.49 MiB/s [2024-12-06T14:41:42.874Z] 11415.00 IOPS, 44.59 MiB/s [2024-12-06T14:41:42.874Z] [2024-12-06 15:41:31.659763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.659797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.659811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.659824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.659833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.659840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.659848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.659855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.659863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:50736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.659869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.659877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.659884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.659892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:50752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.659898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.659906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.659913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.659921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.659928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.659936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 
[2024-12-06 15:41:31.659943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.659951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.659957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.659965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:50792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.659971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.659979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.659986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.659994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.660000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.660010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:50816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.660016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.660025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.660031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.660039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.660047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.660055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.660061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.660069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.660076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.660083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.660090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.876 [2024-12-06 15:41:31.660097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.876 [2024-12-06 15:41:31.660104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:36.877 [2024-12-06 15:41:31.660194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.877 [2024-12-06 15:41:31.660224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:51432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.877 [2024-12-06 15:41:31.660239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:51440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.877 [2024-12-06 15:41:31.660253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:50928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:50944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:50960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 
[2024-12-06 15:41:31.660455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.877 [2024-12-06 15:41:31.660608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.877 [2024-12-06 15:41:31.660616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.878 [2024-12-06 15:41:31.660622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 
[2024-12-06 15:41:31.660708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:36.878 [2024-12-06 15:41:31.660956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.660991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.660998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.661006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.661012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.661021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.661027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.661035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.661041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.661049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.661056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.661064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.661070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.661078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.661084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.661092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.661098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.661106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.661112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.661120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.661126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.661136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.661142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.661150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.661157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.661164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.661171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.878 [2024-12-06 15:41:31.661179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.878 [2024-12-06 15:41:31.661185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 
[2024-12-06 15:41:31.661199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:51488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661281] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661447] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 
[2024-12-06 15:41:31.661616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:36.879 [2024-12-06 15:41:31.661651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:36.879 [2024-12-06 15:41:31.661678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51712 len:8 PRP1 0x0 PRP2 0x0 00:24:36.879 [2024-12-06 15:41:31.661684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:36.879 [2024-12-06 15:41:31.661701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:36.879 [2024-12-06 15:41:31.661707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51720 len:8 PRP1 0x0 PRP2 0x0 00:24:36.879 [2024-12-06 15:41:31.661714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661758] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:36.879 [2024-12-06 15:41:31.661781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.879 [2024-12-06 15:41:31.661788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.879 [2024-12-06 15:41:31.661803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.879 [2024-12-06 15:41:31.661816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.879 [2024-12-06 15:41:31.661829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.879 [2024-12-06 15:41:31.661836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:24:36.879 [2024-12-06 15:41:31.664627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:24:36.880 [2024-12-06 15:41:31.664656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefa0 (9): Bad file descriptor
00:24:36.880 [2024-12-06 15:41:31.686701] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:24:36.880 11338.40 IOPS, 44.29 MiB/s [2024-12-06T14:41:42.878Z]
00:24:36.880 11367.83 IOPS, 44.41 MiB/s [2024-12-06T14:41:42.878Z]
00:24:36.880 11365.29 IOPS, 44.40 MiB/s [2024-12-06T14:41:42.878Z]
00:24:36.880 11383.38 IOPS, 44.47 MiB/s [2024-12-06T14:41:42.878Z]
00:24:36.880 11392.33 IOPS, 44.50 MiB/s [2024-12-06T14:41:42.878Z]
00:24:36.880 [2024-12-06 15:41:36.091956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:36.880 [2024-12-06 15:41:36.091998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:36.880 [2024-12-06 15:41:36.092008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:36.880 [2024-12-06 15:41:36.092015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:36.880 [2024-12-06 15:41:36.092022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:36.880 [2024-12-06 15:41:36.092029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:36.880 [2024-12-06 15:41:36.092036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:36.880 [2024-12-06 15:41:36.092043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:36.880 [2024-12-06 15:41:36.092050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10eefa0 is same with the state(6) to be set
00:24:36.880 [2024-12-06 15:41:36.092875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:36.880 [2024-12-06 15:41:36.092892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs, 15:41:36.092905-15:41:36.094543: each remaining in-flight READ (lba 70816-71320, len:8) and WRITE (lba 71456-71824, len:8) on sqid:1 is printed and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:24:36.883 [2024-12-06 15:41:36.094550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 
[2024-12-06 15:41:36.094723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:36.883 [2024-12-06 15:41:36.094759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:36.883 [2024-12-06 15:41:36.094784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:36.883 [2024-12-06 15:41:36.094790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71448 len:8 PRP1 0x0 PRP2 0x0 00:24:36.883 [2024-12-06 15:41:36.094797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.883 [2024-12-06 15:41:36.094842] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:36.883 [2024-12-06 15:41:36.094852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:24:36.883 [2024-12-06 15:41:36.097633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:36.883 [2024-12-06 15:41:36.097661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10eefa0 (9): Bad file descriptor 00:24:36.883 [2024-12-06 15:41:36.128207] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:24:36.883 11363.60 IOPS, 44.39 MiB/s [2024-12-06T14:41:42.881Z] 11367.09 IOPS, 44.40 MiB/s [2024-12-06T14:41:42.881Z] 11381.67 IOPS, 44.46 MiB/s [2024-12-06T14:41:42.881Z] 11394.31 IOPS, 44.51 MiB/s [2024-12-06T14:41:42.881Z] 11404.79 IOPS, 44.55 MiB/s [2024-12-06T14:41:42.881Z] 11417.33 IOPS, 44.60 MiB/s 00:24:36.883 Latency(us) 00:24:36.883 [2024-12-06T14:41:42.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.883 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:36.883 Verification LBA range: start 0x0 length 0x4000 00:24:36.883 NVMe0n1 : 15.01 11419.23 44.61 278.04 0.00 10920.87 431.06 15291.73 00:24:36.883 [2024-12-06T14:41:42.881Z] =================================================================================================================== 00:24:36.883 [2024-12-06T14:41:42.881Z] Total : 11419.23 44.61 278.04 0.00 10920.87 431.06 15291.73 00:24:36.883 Received shutdown signal, test time was about 15.000000 seconds 00:24:36.883 00:24:36.883 Latency(us) 00:24:36.883 [2024-12-06T14:41:42.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.883 [2024-12-06T14:41:42.881Z] =================================================================================================================== 00:24:36.883 [2024-12-06T14:41:42.881Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3109740 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3109740 /var/tmp/bdevperf.sock 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3109740 ']' 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:36.883 [2024-12-06 15:41:42.624545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:36.883 [2024-12-06 15:41:42.825142] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:36.883 15:41:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:37.452 NVMe0n1 00:24:37.452 15:41:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:37.712 00:24:37.712 15:41:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:37.972 00:24:37.972 15:41:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:37.972 15:41:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:38.231 15:41:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:38.231 15:41:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:41.529 15:41:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:41.529 15:41:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:41.529 15:41:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:41.529 15:41:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3110659 00:24:41.529 15:41:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3110659 00:24:42.902 { 00:24:42.902 "results": [ 00:24:42.902 { 00:24:42.902 "job": "NVMe0n1", 00:24:42.902 "core_mask": "0x1", 00:24:42.902 "workload": "verify", 00:24:42.902 "status": "finished", 00:24:42.902 "verify_range": { 00:24:42.902 "start": 0, 00:24:42.902 "length": 16384 00:24:42.902 }, 00:24:42.902 "queue_depth": 128, 00:24:42.902 "io_size": 4096, 00:24:42.902 "runtime": 1.044498, 00:24:42.902 "iops": 10947.842887205146, 00:24:42.902 "mibps": 42.7650112781451, 00:24:42.902 "io_failed": 0, 00:24:42.902 "io_timeout": 0, 00:24:42.902 "avg_latency_us": 
11195.632355466716, 00:24:42.902 "min_latency_us": 2090.9104761904764, 00:24:42.902 "max_latency_us": 42941.68380952381 00:24:42.902 } 00:24:42.902 ], 00:24:42.902 "core_count": 1 00:24:42.902 } 00:24:42.903 15:41:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:42.903 [2024-12-06 15:41:42.227172] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:24:42.903 [2024-12-06 15:41:42.227226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3109740 ] 00:24:42.903 [2024-12-06 15:41:42.301910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.903 [2024-12-06 15:41:42.339230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.903 [2024-12-06 15:41:44.174548] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:42.903 [2024-12-06 15:41:44.174595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.903 [2024-12-06 15:41:44.174606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.903 [2024-12-06 15:41:44.174615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.903 [2024-12-06 15:41:44.174622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.903 [2024-12-06 15:41:44.174630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:42.903 [2024-12-06 15:41:44.174636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.903 [2024-12-06 15:41:44.174643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.903 [2024-12-06 15:41:44.174650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.903 [2024-12-06 15:41:44.174657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:42.903 [2024-12-06 15:41:44.174682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:42.903 [2024-12-06 15:41:44.174696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd92fa0 (9): Bad file descriptor 00:24:42.903 [2024-12-06 15:41:44.222306] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:42.903 Running I/O for 1 seconds... 
00:24:42.903 11307.00 IOPS, 44.17 MiB/s 00:24:42.903 Latency(us) 00:24:42.903 [2024-12-06T14:41:48.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.903 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:42.903 Verification LBA range: start 0x0 length 0x4000 00:24:42.903 NVMe0n1 : 1.04 10947.84 42.77 0.00 0.00 11195.63 2090.91 42941.68 00:24:42.903 [2024-12-06T14:41:48.901Z] =================================================================================================================== 00:24:42.903 [2024-12-06T14:41:48.901Z] Total : 10947.84 42.77 0.00 0.00 11195.63 2090.91 42941.68 00:24:42.903 15:41:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:42.903 15:41:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:42.903 15:41:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:43.160 15:41:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:43.160 15:41:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:43.160 15:41:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:43.418 15:41:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3109740 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3109740 ']' 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3109740 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3109740 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3109740' 00:24:46.901 killing process with pid 3109740 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3109740 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3109740 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:46.901 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:47.160 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:47.160 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:47.160 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:47.160 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:47.160 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:47.160 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:47.160 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:47.160 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:47.160 15:41:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:47.160 rmmod nvme_tcp 00:24:47.160 rmmod nvme_fabrics 00:24:47.160 rmmod nvme_keyring 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3106724 ']' 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3106724 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3106724 ']' 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3106724 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3106724 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3106724' 00:24:47.161 killing process with pid 3106724 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3106724 00:24:47.161 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3106724 00:24:47.420 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:47.420 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:47.420 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:47.420 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:47.420 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:47.420 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:47.420 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:47.420 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:47.420 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:47.420 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.420 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.420 15:41:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:49.957 00:24:49.957 real 0m37.847s 00:24:49.957 user 1m59.483s 00:24:49.957 sys 
0m8.105s 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:49.957 ************************************ 00:24:49.957 END TEST nvmf_failover 00:24:49.957 ************************************ 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.957 ************************************ 00:24:49.957 START TEST nvmf_host_discovery 00:24:49.957 ************************************ 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:49.957 * Looking for test storage... 
00:24:49.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:49.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.957 --rc genhtml_branch_coverage=1 00:24:49.957 --rc genhtml_function_coverage=1 00:24:49.957 --rc 
genhtml_legend=1 00:24:49.957 --rc geninfo_all_blocks=1 00:24:49.957 --rc geninfo_unexecuted_blocks=1 00:24:49.957 00:24:49.957 ' 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:49.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.957 --rc genhtml_branch_coverage=1 00:24:49.957 --rc genhtml_function_coverage=1 00:24:49.957 --rc genhtml_legend=1 00:24:49.957 --rc geninfo_all_blocks=1 00:24:49.957 --rc geninfo_unexecuted_blocks=1 00:24:49.957 00:24:49.957 ' 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:49.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.957 --rc genhtml_branch_coverage=1 00:24:49.957 --rc genhtml_function_coverage=1 00:24:49.957 --rc genhtml_legend=1 00:24:49.957 --rc geninfo_all_blocks=1 00:24:49.957 --rc geninfo_unexecuted_blocks=1 00:24:49.957 00:24:49.957 ' 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:49.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.957 --rc genhtml_branch_coverage=1 00:24:49.957 --rc genhtml_function_coverage=1 00:24:49.957 --rc genhtml_legend=1 00:24:49.957 --rc geninfo_all_blocks=1 00:24:49.957 --rc geninfo_unexecuted_blocks=1 00:24:49.957 00:24:49.957 ' 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.957 15:41:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.957 15:41:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.957 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.958 15:41:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:49.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:49.958 15:41:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:56.526 
15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.526 15:42:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:56.526 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:56.526 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:56.526 Found net devices under 0000:86:00.0: cvl_0_0 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:56.526 Found net devices under 0000:86:00.1: cvl_0_1 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:56.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:24:56.526 00:24:56.526 --- 10.0.0.2 ping statistics --- 00:24:56.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.526 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:56.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:24:56.526 00:24:56.526 --- 10.0.0.1 ping statistics --- 00:24:56.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.526 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:56.526 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.526 
15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3115212 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3115212 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3115212 ']' 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 [2024-12-06 15:42:01.617128] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:24:56.527 [2024-12-06 15:42:01.617181] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.527 [2024-12-06 15:42:01.698887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.527 [2024-12-06 15:42:01.739595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.527 [2024-12-06 15:42:01.739628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.527 [2024-12-06 15:42:01.739635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.527 [2024-12-06 15:42:01.739641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.527 [2024-12-06 15:42:01.739645] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:56.527 [2024-12-06 15:42:01.740206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 [2024-12-06 15:42:01.876718] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 [2024-12-06 15:42:01.888882] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:56.527 15:42:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 null0 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 null1 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3115249 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3115249 /tmp/host.sock 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 3115249 ']' 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:56.527 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:56.527 15:42:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 [2024-12-06 15:42:01.967056] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:24:56.527 [2024-12-06 15:42:01.967099] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3115249 ] 00:24:56.527 [2024-12-06 15:42:02.041571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.527 [2024-12-06 15:42:02.082086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:56.527 
15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:56.527 15:42:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:56.527 
15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.527 [2024-12-06 15:42:02.510480] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:56.527 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:56.787 15:42:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:57.355 [2024-12-06 15:42:03.204239] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:57.355 [2024-12-06 15:42:03.204257] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:57.355 [2024-12-06 15:42:03.204270] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:57.355 [2024-12-06 15:42:03.333661] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:57.614 [2024-12-06 15:42:03.434365] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:57.614 [2024-12-06 15:42:03.435093] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x233c920:1 started. 00:24:57.614 [2024-12-06 15:42:03.436443] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:57.614 [2024-12-06 15:42:03.436461] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:57.614 [2024-12-06 15:42:03.443449] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x233c920 was disconnected and freed. delete nvme_qpair. 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:57.874 15:42:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:57.874 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.133 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.134 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:58.134 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:58.134 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.134 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.134 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:58.134 
15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:58.134 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.134 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:58.134 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.134 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:58.134 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.134 15:42:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:58.393 [2024-12-06 15:42:04.132184] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x233cca0:1 started. 00:24:58.393 [2024-12-06 15:42:04.135008] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x233cca0 was disconnected and freed. delete nvme_qpair. 
00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.393 [2024-12-06 15:42:04.219067] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:58.393 [2024-12-06 15:42:04.219501] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:58.393 [2024-12-06 15:42:04.219520] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.393 15:42:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.393 15:42:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.393 [2024-12-06 15:42:04.346231] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:58.393 15:42:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:58.653 [2024-12-06 15:42:04.450959] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:58.653 [2024-12-06 15:42:04.450992] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:58.653 [2024-12-06 15:42:04.451000] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:24:58.653 [2024-12-06 15:42:04.451005] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.590 [2024-12-06 15:42:05.478784] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:59.590 [2024-12-06 15:42:05.478806] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:59.590 [2024-12-06 15:42:05.486582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.590 [2024-12-06 15:42:05.486599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.590 [2024-12-06 15:42:05.486608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.590 [2024-12-06 15:42:05.486615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.590 [2024-12-06 15:42:05.486623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.590 [2024-12-06 15:42:05.486629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.590 [2024-12-06 15:42:05.486637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.590 [2024-12-06 15:42:05.486643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.590 [2024-12-06 15:42:05.486650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230e930 is same with the state(6) to be set 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:59.590 15:42:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:59.590 [2024-12-06 15:42:05.496594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230e930 (9): Bad file descriptor 00:24:59.590 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.590 [2024-12-06 15:42:05.506629] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:59.590 [2024-12-06 15:42:05.506640] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:59.590 [2024-12-06 15:42:05.506647] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:59.590 [2024-12-06 15:42:05.506652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:59.590 [2024-12-06 15:42:05.506669] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:59.590 [2024-12-06 15:42:05.506798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.590 [2024-12-06 15:42:05.506812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230e930 with addr=10.0.0.2, port=4420 00:24:59.590 [2024-12-06 15:42:05.506820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230e930 is same with the state(6) to be set 00:24:59.590 [2024-12-06 15:42:05.506831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230e930 (9): Bad file descriptor 00:24:59.590 [2024-12-06 15:42:05.506841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:59.590 [2024-12-06 15:42:05.506847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:59.590 [2024-12-06 15:42:05.506856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:59.590 [2024-12-06 15:42:05.506862] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:59.590 [2024-12-06 15:42:05.506867] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:59.590 [2024-12-06 15:42:05.506872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:59.590 [2024-12-06 15:42:05.516698] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:59.590 [2024-12-06 15:42:05.516709] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:59.591 [2024-12-06 15:42:05.516713] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:59.591 [2024-12-06 15:42:05.516718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:59.591 [2024-12-06 15:42:05.516731] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:59.591 [2024-12-06 15:42:05.516907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.591 [2024-12-06 15:42:05.516924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230e930 with addr=10.0.0.2, port=4420 00:24:59.591 [2024-12-06 15:42:05.516932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230e930 is same with the state(6) to be set 00:24:59.591 [2024-12-06 15:42:05.516942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230e930 (9): Bad file descriptor 00:24:59.591 [2024-12-06 15:42:05.516962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:59.591 [2024-12-06 15:42:05.516968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:59.591 [2024-12-06 15:42:05.516979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:59.591 [2024-12-06 15:42:05.516985] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:59.591 [2024-12-06 15:42:05.516989] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:59.591 [2024-12-06 15:42:05.516993] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:59.591 [2024-12-06 15:42:05.526762] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:59.591 [2024-12-06 15:42:05.526776] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:59.591 [2024-12-06 15:42:05.526780] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:59.591 [2024-12-06 15:42:05.526784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:59.591 [2024-12-06 15:42:05.526798] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:59.591 [2024-12-06 15:42:05.526906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.591 [2024-12-06 15:42:05.526918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230e930 with addr=10.0.0.2, port=4420 00:24:59.591 [2024-12-06 15:42:05.526925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230e930 is same with the state(6) to be set 00:24:59.591 [2024-12-06 15:42:05.526935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230e930 (9): Bad file descriptor 00:24:59.591 [2024-12-06 15:42:05.526944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:59.591 [2024-12-06 15:42:05.526950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:59.591 [2024-12-06 15:42:05.526957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:59.591 [2024-12-06 15:42:05.526962] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:59.591 [2024-12-06 15:42:05.526967] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:59.591 [2024-12-06 15:42:05.526971] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.591 [2024-12-06 15:42:05.536828] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: 
*INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:59.591 [2024-12-06 15:42:05.536842] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:59.591 [2024-12-06 15:42:05.536848] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:59.591 [2024-12-06 15:42:05.536853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:59.591 [2024-12-06 15:42:05.536866] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:59.591 [2024-12-06 15:42:05.537038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.591 [2024-12-06 15:42:05.537050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230e930 with addr=10.0.0.2, port=4420 00:24:59.591 [2024-12-06 15:42:05.537057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230e930 is same with the state(6) to be set 00:24:59.591 [2024-12-06 15:42:05.537067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230e930 (9): Bad file descriptor 00:24:59.591 [2024-12-06 15:42:05.537088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:59.591 [2024-12-06 15:42:05.537095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:59.591 [2024-12-06 15:42:05.537103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:59.591 [2024-12-06 15:42:05.537108] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:59.591 [2024-12-06 15:42:05.537113] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:59.591 [2024-12-06 15:42:05.537117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:59.591 [2024-12-06 15:42:05.546897] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:59.591 [2024-12-06 15:42:05.546910] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:59.591 [2024-12-06 15:42:05.546915] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:59.591 [2024-12-06 15:42:05.546919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:59.591 [2024-12-06 15:42:05.546933] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:59.591 [2024-12-06 15:42:05.547178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.591 [2024-12-06 15:42:05.547190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230e930 with addr=10.0.0.2, port=4420 00:24:59.591 [2024-12-06 15:42:05.547198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230e930 is same with the state(6) to be set 00:24:59.591 [2024-12-06 15:42:05.547209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230e930 (9): Bad file descriptor 00:24:59.591 [2024-12-06 15:42:05.547218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:59.591 [2024-12-06 15:42:05.547225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:59.591 [2024-12-06 15:42:05.547232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:59.591 [2024-12-06 15:42:05.547237] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:59.591 [2024-12-06 15:42:05.547242] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:59.591 [2024-12-06 15:42:05.547250] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:59.591 [2024-12-06 15:42:05.556964] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:59.591 [2024-12-06 15:42:05.556973] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:59.591 [2024-12-06 15:42:05.556978] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:59.591 [2024-12-06 15:42:05.556982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:59.591 [2024-12-06 15:42:05.556993] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:59.591 [2024-12-06 15:42:05.557172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.591 [2024-12-06 15:42:05.557183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230e930 with addr=10.0.0.2, port=4420 00:24:59.591 [2024-12-06 15:42:05.557190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230e930 is same with the state(6) to be set 00:24:59.591 [2024-12-06 15:42:05.557200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230e930 (9): Bad file descriptor 00:24:59.591 [2024-12-06 15:42:05.557214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:59.591 [2024-12-06 15:42:05.557221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:59.591 [2024-12-06 15:42:05.557228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:59.591 [2024-12-06 15:42:05.557233] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:59.591 [2024-12-06 15:42:05.557237] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:59.591 [2024-12-06 15:42:05.557241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:59.591 [2024-12-06 15:42:05.565789] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:59.591 [2024-12-06 15:42:05.565804] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.591 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:59.592 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:59.592 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:59.592 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:59.592 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:59.592 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:59.592 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:59.592 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:59.592 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:59.592 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:59.592 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:59.592 15:42:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:59.592 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.592 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.850 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.850 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:59.850 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:59.850 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:59.850 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:59.850 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:59.850 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:59.850 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:59.850 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:59.850 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:59.851 15:42:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.851 
15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:59.851 15:42:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.851 15:42:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.230 [2024-12-06 15:42:06.893447] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:01.230 [2024-12-06 15:42:06.893463] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:01.230 [2024-12-06 15:42:06.893475] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:01.230 [2024-12-06 15:42:06.979722] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:01.230 [2024-12-06 15:42:07.038307] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:01.230 [2024-12-06 15:42:07.038918] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2309c90:1 started. 00:25:01.230 [2024-12-06 15:42:07.040490] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:01.230 [2024-12-06 15:42:07.040516] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:01.230 [2024-12-06 15:42:07.042966] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2309c90 was disconnected and freed. delete nvme_qpair. 
00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.230 request: 00:25:01.230 { 00:25:01.230 "name": "nvme", 00:25:01.230 "trtype": "tcp", 00:25:01.230 "traddr": "10.0.0.2", 00:25:01.230 "adrfam": "ipv4", 00:25:01.230 "trsvcid": "8009", 00:25:01.230 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:01.230 "wait_for_attach": true, 00:25:01.230 "method": "bdev_nvme_start_discovery", 00:25:01.230 "req_id": 1 00:25:01.230 } 00:25:01.230 Got JSON-RPC error response 00:25:01.230 response: 00:25:01.230 { 00:25:01.230 "code": -17, 00:25:01.230 "message": "File exists" 00:25:01.230 } 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.230 request: 00:25:01.230 { 00:25:01.230 "name": "nvme_second", 00:25:01.230 "trtype": "tcp", 00:25:01.230 "traddr": "10.0.0.2", 00:25:01.230 "adrfam": "ipv4", 00:25:01.230 
"trsvcid": "8009", 00:25:01.230 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:01.230 "wait_for_attach": true, 00:25:01.230 "method": "bdev_nvme_start_discovery", 00:25:01.230 "req_id": 1 00:25:01.230 } 00:25:01.230 Got JSON-RPC error response 00:25:01.230 response: 00:25:01.230 { 00:25:01.230 "code": -17, 00:25:01.230 "message": "File exists" 00:25:01.230 } 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 
-- # get_bdev_list 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.230 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:01.489 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.489 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:01.489 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:01.489 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:01.489 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:01.489 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:01.489 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:01.489 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:01.489 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:25:01.489 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:01.489 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.489 15:42:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.424 [2024-12-06 15:42:08.276170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.424 [2024-12-06 15:42:08.276196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2340760 with addr=10.0.0.2, port=8010 00:25:02.424 [2024-12-06 15:42:08.276209] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:02.424 [2024-12-06 15:42:08.276215] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:02.424 [2024-12-06 15:42:08.276221] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:03.360 [2024-12-06 15:42:09.278744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.360 [2024-12-06 15:42:09.278767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2340760 with addr=10.0.0.2, port=8010 00:25:03.360 [2024-12-06 15:42:09.278780] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:03.360 [2024-12-06 15:42:09.278787] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:03.360 [2024-12-06 15:42:09.278797] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:04.297 [2024-12-06 15:42:10.280909] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:04.297 request: 00:25:04.297 { 00:25:04.297 "name": 
"nvme_second", 00:25:04.297 "trtype": "tcp", 00:25:04.297 "traddr": "10.0.0.2", 00:25:04.297 "adrfam": "ipv4", 00:25:04.297 "trsvcid": "8010", 00:25:04.297 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:04.297 "wait_for_attach": false, 00:25:04.297 "attach_timeout_ms": 3000, 00:25:04.297 "method": "bdev_nvme_start_discovery", 00:25:04.297 "req_id": 1 00:25:04.297 } 00:25:04.297 Got JSON-RPC error response 00:25:04.297 response: 00:25:04.297 { 00:25:04.297 "code": -110, 00:25:04.297 "message": "Connection timed out" 00:25:04.297 } 00:25:04.297 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:04.297 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:04.297 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:04.297 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:04.297 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:04.297 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:04.297 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:04.297 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:04.297 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.297 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:04.297 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.297 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.555 15:42:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3115249 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:04.555 rmmod nvme_tcp 00:25:04.555 rmmod nvme_fabrics 00:25:04.555 rmmod nvme_keyring 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3115212 ']' 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3115212 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3115212 ']' 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3115212 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:04.555 
15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3115212 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3115212' 00:25:04.555 killing process with pid 3115212 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3115212 00:25:04.555 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3115212 00:25:04.814 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:04.814 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:04.814 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:04.814 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:04.814 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:04.814 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:04.814 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:04.814 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:04.814 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:04.814 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:25:04.814 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.814 15:42:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.720 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:06.720 00:25:06.720 real 0m17.240s 00:25:06.720 user 0m20.586s 00:25:06.720 sys 0m5.863s 00:25:06.720 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.720 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.720 ************************************ 00:25:06.720 END TEST nvmf_host_discovery 00:25:06.720 ************************************ 00:25:06.720 15:42:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:06.720 15:42:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:06.720 15:42:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:06.720 15:42:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.980 ************************************ 00:25:06.980 START TEST nvmf_host_multipath_status 00:25:06.980 ************************************ 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:06.980 * Looking for test storage... 
00:25:06.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.980 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:06.981 15:42:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:06.981 15:42:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:06.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.981 --rc genhtml_branch_coverage=1 00:25:06.981 --rc genhtml_function_coverage=1 00:25:06.981 --rc genhtml_legend=1 00:25:06.981 --rc geninfo_all_blocks=1 00:25:06.981 --rc geninfo_unexecuted_blocks=1 00:25:06.981 00:25:06.981 ' 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:06.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.981 --rc genhtml_branch_coverage=1 00:25:06.981 --rc genhtml_function_coverage=1 00:25:06.981 --rc genhtml_legend=1 00:25:06.981 --rc geninfo_all_blocks=1 00:25:06.981 --rc geninfo_unexecuted_blocks=1 00:25:06.981 00:25:06.981 ' 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:06.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.981 --rc genhtml_branch_coverage=1 00:25:06.981 --rc genhtml_function_coverage=1 00:25:06.981 --rc genhtml_legend=1 00:25:06.981 --rc geninfo_all_blocks=1 00:25:06.981 --rc geninfo_unexecuted_blocks=1 00:25:06.981 00:25:06.981 ' 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:06.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.981 --rc genhtml_branch_coverage=1 00:25:06.981 --rc genhtml_function_coverage=1 00:25:06.981 --rc genhtml_legend=1 00:25:06.981 --rc geninfo_all_blocks=1 00:25:06.981 --rc geninfo_unexecuted_blocks=1 00:25:06.981 00:25:06.981 ' 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:06.981 
15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:06.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:06.981 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.982 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.982 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.982 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:06.982 15:42:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:06.982 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:06.982 15:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:13.554 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:13.554 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:13.554 Found net devices under 0000:86:00.0: cvl_0_0 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.554 15:42:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:13.554 Found net devices under 0000:86:00.1: cvl_0_1 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.554 15:42:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.554 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:25:13.555 00:25:13.555 --- 10.0.0.2 ping statistics --- 00:25:13.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.555 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:25:13.555 00:25:13.555 --- 10.0.0.1 ping statistics --- 00:25:13.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.555 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3120721 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3120721 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3120721 ']' 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.555 15:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:13.555 [2024-12-06 15:42:18.933265] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:25:13.555 [2024-12-06 15:42:18.933309] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.555 [2024-12-06 15:42:19.011823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:13.555 [2024-12-06 15:42:19.053002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.555 [2024-12-06 15:42:19.053037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:13.555 [2024-12-06 15:42:19.053044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.555 [2024-12-06 15:42:19.053050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.555 [2024-12-06 15:42:19.053055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.555 [2024-12-06 15:42:19.054248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.555 [2024-12-06 15:42:19.054251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.555 15:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.555 15:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:13.555 15:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:13.555 15:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:13.555 15:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:13.555 15:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.555 15:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3120721 00:25:13.555 15:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:13.555 [2024-12-06 15:42:19.352144] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.555 15:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:25:13.814 Malloc0 00:25:13.814 15:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:14.073 15:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:14.073 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.332 [2024-12-06 15:42:20.190066] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.332 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:14.591 [2024-12-06 15:42:20.386543] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:14.591 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:14.591 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3120976 00:25:14.591 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:14.591 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3120976 /var/tmp/bdevperf.sock 00:25:14.591 15:42:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3120976 ']' 00:25:14.591 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:14.591 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.591 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:14.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:14.591 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.591 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:14.850 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.850 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:14.850 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:15.109 15:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:15.368 Nvme0n1 00:25:15.368 15:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:15.627 Nvme0n1 00:25:15.627 15:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:15.627 15:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:18.164 15:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:18.164 15:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:18.164 15:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:18.164 15:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:19.101 15:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:19.101 15:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:19.101 15:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.101 15:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:19.359 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.359 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:19.359 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.359 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:19.617 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.617 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:19.617 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.617 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:19.617 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.617 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:19.618 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.618 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:19.875 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.875 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:19.875 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.875 15:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:20.134 15:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.134 15:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:20.134 15:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.134 15:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:20.392 15:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.392 15:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:20.393 15:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:20.652 15:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:20.910 15:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:21.847 15:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:21.847 15:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:21.847 15:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.847 15:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:22.107 15:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:22.107 15:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:22.107 15:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:22.107 15:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.366 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.366 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:22.366 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.366 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:22.366 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.366 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:22.366 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.366 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:22.625 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.625 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:22.625 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.625 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:22.884 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.884 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:22.884 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:25:22.884 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.143 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.143 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:23.143 15:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:23.402 15:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:23.402 15:42:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:24.780 15:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:24.781 15:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:24.781 15:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.781 15:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:24.781 15:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:25:24.781 15:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:24.781 15:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:24.781 15:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.039 15:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:25.039 15:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:25.039 15:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.039 15:42:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:25.039 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.039 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:25.039 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.039 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:25.297 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ 
true == \t\r\u\e ]] 00:25:25.297 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:25.297 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.297 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:25.554 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.554 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:25.554 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.554 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:25.812 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.812 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:25.812 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:26.070 15:42:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:26.070 15:42:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:27.443 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:27.443 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:27.443 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.443 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:27.443 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.443 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:27.443 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:27.443 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.701 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:27.701 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:27.701 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:27.701 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:27.701 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.701 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:27.701 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.701 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:27.958 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.958 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:27.958 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.958 15:42:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:28.216 15:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.216 15:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:28.216 15:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.216 15:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:28.474 15:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:28.474 15:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:28.474 15:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:28.731 15:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:28.989 15:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:29.926 15:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:29.926 15:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:29.926 15:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.926 15:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:30.184 15:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 
00:25:30.184 15:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:30.184 15:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:30.184 15:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.184 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:30.184 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:30.184 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.184 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:30.441 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.441 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:30.441 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.441 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:30.699 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
]] 00:25:30.699 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:30.699 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.699 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:30.956 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:30.956 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:30.956 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.956 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:30.956 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:30.956 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:30.956 15:42:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:31.215 15:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:31.473 15:42:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:32.409 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:32.409 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:32.409 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.409 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:32.669 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.669 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:32.669 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.669 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.928 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.928 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.928 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:32.928 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:33.187 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.187 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:33.187 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.187 15:42:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:33.187 15:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.187 15:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:33.187 15:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.187 15:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:33.446 15:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:33.446 15:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:33.446 15:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.446 15:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.706 15:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.706 15:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:33.964 15:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:33.964 15:42:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:34.223 15:42:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:34.223 15:42:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:35.678 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:35.678 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:35.678 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.678 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:35.678 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.678 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:35.678 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.678 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.678 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.678 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.678 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.678 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:36.007 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.008 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:36.008 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.008 15:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:36.290 15:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.290 15:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:36.290 15:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.290 15:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:36.290 15:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.290 15:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:36.290 15:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.290 15:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:36.549 15:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.549 15:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:36.549 15:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:25:36.808 15:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:37.067 15:42:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:38.004 15:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:38.004 15:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:38.004 15:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.004 15:42:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:38.263 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:38.263 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:38.263 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.263 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:38.522 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.522 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 
4420 connected true 00:25:38.522 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:38.522 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.780 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.780 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:38.780 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.780 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:38.780 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.780 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:38.780 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.780 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:39.039 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.039 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:25:39.039 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.039 15:42:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:39.298 15:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.298 15:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:39.298 15:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:39.557 15:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:39.816 15:42:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:40.753 15:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:40.753 15:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:40.753 15:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.753 15:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:41.012 15:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.012 15:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:41.012 15:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:41.012 15:42:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.271 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.271 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:41.271 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.271 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:41.529 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.529 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:41.529 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.529 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:41.529 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.529 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:41.529 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.529 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:41.787 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.787 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:41.787 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.787 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:42.046 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.046 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:42.046 15:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:42.305 15:42:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:42.564 15:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:43.501 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:43.501 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:43.501 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.501 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:43.760 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.760 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:43.760 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.760 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:43.760 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.760 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:43.760 
15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.760 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:44.019 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.020 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:44.020 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.020 15:42:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:44.278 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.278 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:44.278 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.279 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:44.536 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.536 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 
00:25:44.537 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.537 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:44.795 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.795 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3120976 00:25:44.795 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3120976 ']' 00:25:44.795 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3120976 00:25:44.795 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:44.795 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.795 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3120976 00:25:44.795 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:44.795 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:44.795 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3120976' 00:25:44.795 killing process with pid 3120976 00:25:44.795 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3120976 00:25:44.795 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3120976 00:25:44.795 { 
00:25:44.795 "results": [ 00:25:44.795 { 00:25:44.795 "job": "Nvme0n1", 00:25:44.795 "core_mask": "0x4", 00:25:44.795 "workload": "verify", 00:25:44.795 "status": "terminated", 00:25:44.795 "verify_range": { 00:25:44.795 "start": 0, 00:25:44.795 "length": 16384 00:25:44.795 }, 00:25:44.795 "queue_depth": 128, 00:25:44.795 "io_size": 4096, 00:25:44.795 "runtime": 28.945907, 00:25:44.795 "iops": 10723.795941167087, 00:25:44.795 "mibps": 41.889827895183934, 00:25:44.795 "io_failed": 0, 00:25:44.795 "io_timeout": 0, 00:25:44.795 "avg_latency_us": 11916.404336752776, 00:25:44.795 "min_latency_us": 237.95809523809524, 00:25:44.795 "max_latency_us": 3019898.88 00:25:44.795 } 00:25:44.795 ], 00:25:44.795 "core_count": 1 00:25:44.795 } 00:25:45.055 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3120976 00:25:45.055 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:45.055 [2024-12-06 15:42:20.447103] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:25:45.056 [2024-12-06 15:42:20.447156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3120976 ] 00:25:45.056 [2024-12-06 15:42:20.521533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.056 [2024-12-06 15:42:20.562428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.056 Running I/O for 90 seconds... 
00:25:45.056 11372.00 IOPS, 44.42 MiB/s [2024-12-06T14:42:51.054Z] 11448.50 IOPS, 44.72 MiB/s [2024-12-06T14:42:51.054Z] 11505.67 IOPS, 44.94 MiB/s [2024-12-06T14:42:51.054Z] 11539.75 IOPS, 45.08 MiB/s [2024-12-06T14:42:51.054Z] 11569.00 IOPS, 45.19 MiB/s [2024-12-06T14:42:51.054Z] 11586.67 IOPS, 45.26 MiB/s [2024-12-06T14:42:51.054Z] 11590.14 IOPS, 45.27 MiB/s [2024-12-06T14:42:51.054Z] 11600.62 IOPS, 45.31 MiB/s [2024-12-06T14:42:51.054Z] 11587.89 IOPS, 45.27 MiB/s [2024-12-06T14:42:51.054Z] 11563.90 IOPS, 45.17 MiB/s [2024-12-06T14:42:51.054Z] 11565.45 IOPS, 45.18 MiB/s [2024-12-06T14:42:51.054Z] 11558.83 IOPS, 45.15 MiB/s [2024-12-06T14:42:51.054Z] [2024-12-06 15:42:34.517895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.517933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.517970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.517979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.517993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.518001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.518013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.518021] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.518034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.518041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.518053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.518061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.518073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.518079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.518091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.518098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.518136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.518144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.518856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:79 nsid:1 lba:7760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.518880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.518895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.518903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.518915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.518922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.518935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.518942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.518954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.518961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.518973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.518980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.518993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7848 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 
m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 
[2024-12-06 15:42:34.519363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:45.056 [2024-12-06 15:42:34.519427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.056 [2024-12-06 15:42:34.519434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:45.057 [2024-12-06 15:42:34.519448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.057 [2024-12-06 15:42:34.519454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:45.057 [2024-12-06 15:42:34.519469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.057 [2024-12-06 15:42:34.519475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:45.057 [2024-12-06 
00:25:45.057 [2024-12-06 15:42:34.519489 .. 15:42:34.521640] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: long run of WRITE commands (sqid:1 nsid:1 lba:7984..8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1 nsid:1 lba:7576..7680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 (per-command NOTICE pairs repeat for each lba in range)
00:25:45.059 11415.62 IOPS, 44.59 MiB/s [2024-12-06T14:42:51.057Z] 10600.21 IOPS, 41.41 MiB/s [2024-12-06T14:42:51.057Z] 9893.53 IOPS, 38.65 MiB/s [2024-12-06T14:42:51.057Z] 9374.06 IOPS, 36.62 MiB/s [2024-12-06T14:42:51.057Z] 9490.24 IOPS, 37.07 MiB/s [2024-12-06T14:42:51.057Z] 9605.11 IOPS, 37.52 MiB/s [2024-12-06T14:42:51.057Z] 9766.53 IOPS, 38.15 MiB/s [2024-12-06T14:42:51.057Z] 9957.05 IOPS, 38.89 MiB/s [2024-12-06T14:42:51.057Z] 10130.29 IOPS, 39.57 MiB/s [2024-12-06T14:42:51.057Z] 10212.68 IOPS, 39.89 MiB/s [2024-12-06T14:42:51.057Z] 10270.22 IOPS, 40.12 MiB/s [2024-12-06T14:42:51.057Z] 10331.21 IOPS, 40.36 MiB/s [2024-12-06T14:42:51.057Z] 10456.64 IOPS, 40.85 MiB/s [2024-12-06T14:42:51.057Z] 10587.46 IOPS, 41.36 MiB/s [2024-12-06T14:42:51.057Z]
00:25:45.059 [2024-12-06 15:42:48.325569 .. 15:42:48.326950] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: long run of WRITE commands (sqid:1 nsid:1 lba:42816..43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 (per-command NOTICE pairs repeat for each lba in range)
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.326962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.326969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.326982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.326989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.060 [2024-12-06 15:42:48.327907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:45.060 [2024-12-06 15:42:48.327920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.061 [2024-12-06 15:42:48.327927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:45.061 10672.11 IOPS, 41.69 MiB/s [2024-12-06T14:42:51.059Z] 10706.71 IOPS, 41.82 MiB/s [2024-12-06T14:42:51.059Z] Received shutdown signal, test time was about 28.946550 seconds 00:25:45.061 00:25:45.061 Latency(us) 00:25:45.061 
[2024-12-06T14:42:51.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:45.061 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:45.061 Verification LBA range: start 0x0 length 0x4000
00:25:45.061 Nvme0n1 : 28.95 10723.80 41.89 0.00 0.00 11916.40 237.96 3019898.88
00:25:45.061 [2024-12-06T14:42:51.059Z] ===================================================================================================================
00:25:45.061 [2024-12-06T14:42:51.059Z] Total : 10723.80 41.89 0.00 0.00 11916.40 237.96 3019898.88
00:25:45.061 15:42:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:45.061 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:45.061 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:45.061 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:45.061 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:45.061 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:25:45.061 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:45.061 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:25:45.061 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:45.061 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:25:45.319
15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3120721 ']'
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3120721
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3120721 ']'
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3120721
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3120721
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3120721'
killing process with pid 3120721
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3120721
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3120721
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:45.319 15:42:51
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:45.319 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:25:45.576 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:45.576 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:45.576 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:45.576 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:45.576 15:42:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:47.507 15:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:47.507
00:25:47.507 real 0m40.637s
00:25:47.507 user 1m50.210s
00:25:47.507 sys 0m11.593s
00:25:47.507 15:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:47.507 15:42:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:47.507 ************************************
00:25:47.507 END TEST nvmf_host_multipath_status
00:25:47.507 ************************************
00:25:47.507 15:42:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:47.507 15:42:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:47.507 15:42:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.507 15:42:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.507 ************************************ 00:25:47.507 START TEST nvmf_discovery_remove_ifc 00:25:47.507 ************************************ 00:25:47.507 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:47.766 * Looking for test storage... 00:25:47.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:47.766 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:47.766 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:25:47.766 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:47.766 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:47.766 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.766 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.767 15:42:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:47.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.767 --rc genhtml_branch_coverage=1 00:25:47.767 --rc genhtml_function_coverage=1 00:25:47.767 --rc genhtml_legend=1 00:25:47.767 --rc geninfo_all_blocks=1 
00:25:47.767 --rc geninfo_unexecuted_blocks=1 00:25:47.767 00:25:47.767 ' 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:47.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.767 --rc genhtml_branch_coverage=1 00:25:47.767 --rc genhtml_function_coverage=1 00:25:47.767 --rc genhtml_legend=1 00:25:47.767 --rc geninfo_all_blocks=1 00:25:47.767 --rc geninfo_unexecuted_blocks=1 00:25:47.767 00:25:47.767 ' 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:47.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.767 --rc genhtml_branch_coverage=1 00:25:47.767 --rc genhtml_function_coverage=1 00:25:47.767 --rc genhtml_legend=1 00:25:47.767 --rc geninfo_all_blocks=1 00:25:47.767 --rc geninfo_unexecuted_blocks=1 00:25:47.767 00:25:47.767 ' 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:47.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.767 --rc genhtml_branch_coverage=1 00:25:47.767 --rc genhtml_function_coverage=1 00:25:47.767 --rc genhtml_legend=1 00:25:47.767 --rc geninfo_all_blocks=1 00:25:47.767 --rc geninfo_unexecuted_blocks=1 00:25:47.767 00:25:47.767 ' 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.767 
15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.767 
15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:47.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:47.767 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:47.768 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:47.768 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:47.768 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:47.768 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.768 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:47.768 15:42:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:47.768 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:47.768 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.768 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.768 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.768 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:47.768 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:47.768 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:47.768 15:42:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:54.335 15:42:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.335 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.336 15:42:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:54.336 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.336 15:42:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:54.336 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:54.336 Found net devices under 0000:86:00.0: cvl_0_0 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:54.336 Found net devices under 0000:86:00.1: cvl_0_1 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:54.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:25:54.336 00:25:54.336 --- 10.0.0.2 ping statistics --- 00:25:54.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.336 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:25:54.336 00:25:54.336 --- 10.0.0.1 ping statistics --- 00:25:54.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.336 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3129648 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:54.336 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3129648 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3129648 ']' 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.337 [2024-12-06 15:42:59.672005] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:25:54.337 [2024-12-06 15:42:59.672050] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.337 [2024-12-06 15:42:59.737274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.337 [2024-12-06 15:42:59.775360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.337 [2024-12-06 15:42:59.775402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:54.337 [2024-12-06 15:42:59.775410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.337 [2024-12-06 15:42:59.775416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.337 [2024-12-06 15:42:59.775421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:54.337 [2024-12-06 15:42:59.775990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.337 [2024-12-06 15:42:59.927644] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.337 [2024-12-06 15:42:59.935839] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:54.337 null0 00:25:54.337 [2024-12-06 15:42:59.967802] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3129755 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3129755 /tmp/host.sock 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3129755 ']' 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:54.337 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.337 15:42:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.337 [2024-12-06 15:43:00.037919] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:25:54.337 [2024-12-06 15:43:00.037967] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129755 ] 00:25:54.337 [2024-12-06 15:43:00.111596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.337 [2024-12-06 15:43:00.154400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.337 15:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.337 15:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:54.337 15:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:54.337 15:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:54.337 15:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.337 15:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.337 15:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.337 15:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:54.337 15:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.337 15:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.337 15:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.337 15:43:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:54.337 15:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.337 15:43:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.717 [2024-12-06 15:43:01.293781] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:55.717 [2024-12-06 15:43:01.293799] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:55.717 [2024-12-06 15:43:01.293810] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:55.717 [2024-12-06 15:43:01.380068] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:55.717 [2024-12-06 15:43:01.554902] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:55.717 [2024-12-06 15:43:01.555535] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1fc6940:1 started. 
00:25:55.717 [2024-12-06 15:43:01.556838] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:55.717 [2024-12-06 15:43:01.556877] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:55.717 [2024-12-06 15:43:01.556897] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:55.717 [2024-12-06 15:43:01.556909] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:55.717 [2024-12-06 15:43:01.556927] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:55.717 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.717 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:55.717 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:55.717 [2024-12-06 15:43:01.563068] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1fc6940 was disconnected and freed. delete nvme_qpair. 
00:25:55.717 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.717 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:55.717 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:55.717 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.717 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:55.717 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.717 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.717 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:55.717 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:55.718 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:55.718 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:55.718 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:55.976 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.976 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:55.976 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.976 15:43:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:55.976 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.976 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:55.976 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.976 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:55.976 15:43:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:56.913 15:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:56.913 15:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.913 15:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:56.913 15:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.913 15:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:56.913 15:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:56.913 15:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:56.913 15:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.913 15:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:56.913 15:43:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:57.850 15:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:25:57.850 15:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.850 15:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:57.850 15:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.850 15:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:57.850 15:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.850 15:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:57.850 15:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.109 15:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:58.109 15:43:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:59.045 15:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.045 15:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.045 15:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.045 15:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.045 15:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.045 15:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.045 15:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.045 15:43:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.045 15:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:59.045 15:43:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:59.984 15:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.984 15:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.984 15:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.984 15:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.984 15:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.984 15:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.984 15:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.984 15:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.984 15:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:59.984 15:43:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.361 15:43:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.361 15:43:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.361 15:43:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.361 15:43:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.361 15:43:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:01.361 15:43:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.361 15:43:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.361 15:43:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.361 [2024-12-06 15:43:06.998537] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:01.361 [2024-12-06 15:43:06.998571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.361 [2024-12-06 15:43:06.998581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.361 [2024-12-06 15:43:06.998590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.361 [2024-12-06 15:43:06.998597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.361 [2024-12-06 15:43:06.998605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.361 [2024-12-06 15:43:06.998611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.361 [2024-12-06 15:43:06.998619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.361 
[2024-12-06 15:43:06.998626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.361 [2024-12-06 15:43:06.998633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:01.361 [2024-12-06 15:43:06.998640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.361 [2024-12-06 15:43:06.998647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3160 is same with the state(6) to be set 00:26:01.361 [2024-12-06 15:43:07.008559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa3160 (9): Bad file descriptor 00:26:01.361 [2024-12-06 15:43:07.018594] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:01.361 [2024-12-06 15:43:07.018606] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:01.361 [2024-12-06 15:43:07.018612] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:01.361 [2024-12-06 15:43:07.018617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:01.361 [2024-12-06 15:43:07.018635] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:01.361 15:43:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:01.361 15:43:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:02.298 15:43:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.298 15:43:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.298 15:43:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.298 15:43:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.298 15:43:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.298 15:43:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.298 15:43:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.298 [2024-12-06 15:43:08.082419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:02.298 [2024-12-06 15:43:08.082499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa3160 with addr=10.0.0.2, port=4420 00:26:02.298 [2024-12-06 15:43:08.082533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3160 is same with the state(6) to be set 00:26:02.298 [2024-12-06 15:43:08.082585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa3160 (9): Bad file descriptor 00:26:02.298 [2024-12-06 15:43:08.083546] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:02.298 [2024-12-06 15:43:08.083609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:02.298 [2024-12-06 15:43:08.083633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:02.298 [2024-12-06 15:43:08.083657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:02.298 [2024-12-06 15:43:08.083677] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:02.298 [2024-12-06 15:43:08.083694] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:02.298 [2024-12-06 15:43:08.083708] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:02.298 [2024-12-06 15:43:08.083730] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:02.298 [2024-12-06 15:43:08.083744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:02.298 15:43:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.298 15:43:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:02.298 15:43:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:03.234 [2024-12-06 15:43:09.086260] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:03.234 [2024-12-06 15:43:09.086284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:03.234 [2024-12-06 15:43:09.086295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:03.234 [2024-12-06 15:43:09.086301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:03.234 [2024-12-06 15:43:09.086308] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:03.234 [2024-12-06 15:43:09.086314] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:03.234 [2024-12-06 15:43:09.086319] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:03.234 [2024-12-06 15:43:09.086323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:03.234 [2024-12-06 15:43:09.086342] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:03.234 [2024-12-06 15:43:09.086361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.234 [2024-12-06 15:43:09.086374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.234 [2024-12-06 15:43:09.086383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.234 [2024-12-06 15:43:09.086389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.234 [2024-12-06 15:43:09.086396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:03.234 [2024-12-06 15:43:09.086403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.234 [2024-12-06 15:43:09.086410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.234 [2024-12-06 15:43:09.086416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.234 [2024-12-06 15:43:09.086423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.234 [2024-12-06 15:43:09.086430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.234 [2024-12-06 15:43:09.086436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:03.234 [2024-12-06 15:43:09.086737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f92450 (9): Bad file descriptor 00:26:03.234 [2024-12-06 15:43:09.087747] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:03.234 [2024-12-06 15:43:09.087758] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:03.235 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:03.235 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.235 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.235 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:03.235 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.235 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.235 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:03.235 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.235 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:03.235 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:03.235 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.493 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:03.493 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:03.493 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.493 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.493 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.493 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.493 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:03.493 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.493 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:03.493 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:03.493 15:43:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:04.426 15:43:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:04.426 15:43:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.426 15:43:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:04.426 15:43:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.426 15:43:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:04.426 15:43:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.426 15:43:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:04.426 15:43:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.426 15:43:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:04.426 15:43:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:05.359 [2024-12-06 15:43:11.139518] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:05.359 [2024-12-06 15:43:11.139536] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:05.359 [2024-12-06 15:43:11.139551] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:05.359 [2024-12-06 15:43:11.226800] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:05.359 15:43:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:05.360 15:43:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:05.360 15:43:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.360 15:43:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.360 15:43:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:05.360 15:43:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.360 15:43:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:05.617 15:43:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.617 15:43:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:05.617 15:43:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:05.617 [2024-12-06 15:43:11.410838] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:05.617 [2024-12-06 15:43:11.411481] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1f77090:1 started. 
00:26:05.617 [2024-12-06 15:43:11.412498] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:05.617 [2024-12-06 15:43:11.412530] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:05.617 [2024-12-06 15:43:11.412547] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:05.617 [2024-12-06 15:43:11.412559] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:05.617 [2024-12-06 15:43:11.412566] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:05.617 [2024-12-06 15:43:11.418347] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1f77090 was disconnected and freed. delete nvme_qpair. 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:06.551 15:43:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3129755 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3129755 ']' 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3129755 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129755 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129755' 00:26:06.551 killing process with pid 3129755 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3129755 00:26:06.551 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3129755 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:06.809 
15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:06.809 rmmod nvme_tcp 00:26:06.809 rmmod nvme_fabrics 00:26:06.809 rmmod nvme_keyring 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3129648 ']' 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3129648 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3129648 ']' 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3129648 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129648 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129648' 00:26:06.809 
killing process with pid 3129648 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3129648 00:26:06.809 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3129648 00:26:07.068 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:07.068 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:07.068 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:07.068 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:07.068 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:07.068 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:07.068 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:07.068 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:07.068 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:07.068 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.068 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.068 15:43:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.603 15:43:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:09.603 00:26:09.603 real 0m21.539s 00:26:09.603 user 0m26.740s 00:26:09.603 sys 0m5.941s 00:26:09.603 15:43:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.603 15:43:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.603 ************************************ 00:26:09.603 END TEST nvmf_discovery_remove_ifc 00:26:09.603 ************************************ 00:26:09.603 15:43:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:09.603 15:43:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:09.603 15:43:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.603 15:43:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.603 ************************************ 00:26:09.603 START TEST nvmf_identify_kernel_target 00:26:09.603 ************************************ 00:26:09.603 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:09.603 * Looking for test storage... 
00:26:09.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:09.603 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:09.603 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:26:09.603 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:09.603 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:09.604 15:43:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:09.604 15:43:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:09.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.604 --rc genhtml_branch_coverage=1 00:26:09.604 --rc genhtml_function_coverage=1 00:26:09.604 --rc genhtml_legend=1 00:26:09.604 --rc geninfo_all_blocks=1 00:26:09.604 --rc geninfo_unexecuted_blocks=1 00:26:09.604 00:26:09.604 ' 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:09.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.604 --rc genhtml_branch_coverage=1 00:26:09.604 --rc genhtml_function_coverage=1 00:26:09.604 --rc genhtml_legend=1 00:26:09.604 --rc geninfo_all_blocks=1 00:26:09.604 --rc geninfo_unexecuted_blocks=1 00:26:09.604 00:26:09.604 ' 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:09.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.604 --rc genhtml_branch_coverage=1 00:26:09.604 --rc genhtml_function_coverage=1 00:26:09.604 --rc genhtml_legend=1 00:26:09.604 --rc geninfo_all_blocks=1 00:26:09.604 --rc geninfo_unexecuted_blocks=1 00:26:09.604 00:26:09.604 ' 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:09.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.604 --rc genhtml_branch_coverage=1 00:26:09.604 --rc genhtml_function_coverage=1 00:26:09.604 --rc genhtml_legend=1 00:26:09.604 --rc geninfo_all_blocks=1 00:26:09.604 --rc geninfo_unexecuted_blocks=1 00:26:09.604 00:26:09.604 ' 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:09.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:09.604 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:09.605 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:09.605 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.605 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.605 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.605 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:09.605 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:09.605 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:09.605 15:43:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.178 15:43:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.178 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:16.179 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:16.179 15:43:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:16.179 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.179 15:43:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:16.179 Found net devices under 0000:86:00.0: cvl_0_0 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:16.179 Found net devices under 0000:86:00.1: cvl_0_1 
00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:16.179 15:43:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:16.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:16.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:26:16.179 00:26:16.179 --- 10.0.0.2 ping statistics --- 00:26:16.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.179 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:16.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:16.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:26:16.179 00:26:16.179 --- 10.0.0.1 ping statistics --- 00:26:16.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.179 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:16.179 
15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.179 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.180 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:16.180 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:16.180 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:16.180 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:16.180 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:16.180 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:16.180 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:16.180 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:16.180 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:16.180 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:16.180 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:16.180 15:43:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:18.086 Waiting for block devices as requested 00:26:18.086 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:18.343 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:18.343 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:18.343 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:18.601 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:18.601 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:18.601 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:18.601 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:18.858 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:18.858 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:18.858 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:19.116 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:19.116 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:19.116 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:19.374 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:26:19.374 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:19.374 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:19.632 No valid GPT data, bailing 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:19.632 00:26:19.632 Discovery Log Number of Records 2, Generation counter 2 00:26:19.632 =====Discovery Log Entry 0====== 00:26:19.632 trtype: tcp 00:26:19.632 adrfam: ipv4 00:26:19.632 subtype: current discovery subsystem 
00:26:19.632 treq: not specified, sq flow control disable supported 00:26:19.632 portid: 1 00:26:19.632 trsvcid: 4420 00:26:19.632 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:19.632 traddr: 10.0.0.1 00:26:19.632 eflags: none 00:26:19.632 sectype: none 00:26:19.632 =====Discovery Log Entry 1====== 00:26:19.632 trtype: tcp 00:26:19.632 adrfam: ipv4 00:26:19.632 subtype: nvme subsystem 00:26:19.632 treq: not specified, sq flow control disable supported 00:26:19.632 portid: 1 00:26:19.632 trsvcid: 4420 00:26:19.632 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:19.632 traddr: 10.0.0.1 00:26:19.632 eflags: none 00:26:19.632 sectype: none 00:26:19.632 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:19.632 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:19.891 ===================================================== 00:26:19.891 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:19.891 ===================================================== 00:26:19.891 Controller Capabilities/Features 00:26:19.891 ================================ 00:26:19.891 Vendor ID: 0000 00:26:19.891 Subsystem Vendor ID: 0000 00:26:19.891 Serial Number: 2e9e3ba37ce5b1d4bf98 00:26:19.891 Model Number: Linux 00:26:19.891 Firmware Version: 6.8.9-20 00:26:19.891 Recommended Arb Burst: 0 00:26:19.891 IEEE OUI Identifier: 00 00 00 00:26:19.891 Multi-path I/O 00:26:19.891 May have multiple subsystem ports: No 00:26:19.891 May have multiple controllers: No 00:26:19.891 Associated with SR-IOV VF: No 00:26:19.891 Max Data Transfer Size: Unlimited 00:26:19.891 Max Number of Namespaces: 0 00:26:19.891 Max Number of I/O Queues: 1024 00:26:19.891 NVMe Specification Version (VS): 1.3 00:26:19.892 NVMe Specification Version (Identify): 1.3 00:26:19.892 Maximum Queue Entries: 1024 
00:26:19.892 Contiguous Queues Required: No 00:26:19.892 Arbitration Mechanisms Supported 00:26:19.892 Weighted Round Robin: Not Supported 00:26:19.892 Vendor Specific: Not Supported 00:26:19.892 Reset Timeout: 7500 ms 00:26:19.892 Doorbell Stride: 4 bytes 00:26:19.892 NVM Subsystem Reset: Not Supported 00:26:19.892 Command Sets Supported 00:26:19.892 NVM Command Set: Supported 00:26:19.892 Boot Partition: Not Supported 00:26:19.892 Memory Page Size Minimum: 4096 bytes 00:26:19.892 Memory Page Size Maximum: 4096 bytes 00:26:19.892 Persistent Memory Region: Not Supported 00:26:19.892 Optional Asynchronous Events Supported 00:26:19.892 Namespace Attribute Notices: Not Supported 00:26:19.892 Firmware Activation Notices: Not Supported 00:26:19.892 ANA Change Notices: Not Supported 00:26:19.892 PLE Aggregate Log Change Notices: Not Supported 00:26:19.892 LBA Status Info Alert Notices: Not Supported 00:26:19.892 EGE Aggregate Log Change Notices: Not Supported 00:26:19.892 Normal NVM Subsystem Shutdown event: Not Supported 00:26:19.892 Zone Descriptor Change Notices: Not Supported 00:26:19.892 Discovery Log Change Notices: Supported 00:26:19.892 Controller Attributes 00:26:19.892 128-bit Host Identifier: Not Supported 00:26:19.892 Non-Operational Permissive Mode: Not Supported 00:26:19.892 NVM Sets: Not Supported 00:26:19.892 Read Recovery Levels: Not Supported 00:26:19.892 Endurance Groups: Not Supported 00:26:19.892 Predictable Latency Mode: Not Supported 00:26:19.892 Traffic Based Keep ALive: Not Supported 00:26:19.892 Namespace Granularity: Not Supported 00:26:19.892 SQ Associations: Not Supported 00:26:19.892 UUID List: Not Supported 00:26:19.892 Multi-Domain Subsystem: Not Supported 00:26:19.892 Fixed Capacity Management: Not Supported 00:26:19.892 Variable Capacity Management: Not Supported 00:26:19.892 Delete Endurance Group: Not Supported 00:26:19.892 Delete NVM Set: Not Supported 00:26:19.892 Extended LBA Formats Supported: Not Supported 00:26:19.892 Flexible 
Data Placement Supported: Not Supported 00:26:19.892 00:26:19.892 Controller Memory Buffer Support 00:26:19.892 ================================ 00:26:19.892 Supported: No 00:26:19.892 00:26:19.892 Persistent Memory Region Support 00:26:19.892 ================================ 00:26:19.892 Supported: No 00:26:19.892 00:26:19.892 Admin Command Set Attributes 00:26:19.892 ============================ 00:26:19.892 Security Send/Receive: Not Supported 00:26:19.892 Format NVM: Not Supported 00:26:19.892 Firmware Activate/Download: Not Supported 00:26:19.892 Namespace Management: Not Supported 00:26:19.892 Device Self-Test: Not Supported 00:26:19.892 Directives: Not Supported 00:26:19.892 NVMe-MI: Not Supported 00:26:19.892 Virtualization Management: Not Supported 00:26:19.892 Doorbell Buffer Config: Not Supported 00:26:19.892 Get LBA Status Capability: Not Supported 00:26:19.892 Command & Feature Lockdown Capability: Not Supported 00:26:19.892 Abort Command Limit: 1 00:26:19.892 Async Event Request Limit: 1 00:26:19.892 Number of Firmware Slots: N/A 00:26:19.892 Firmware Slot 1 Read-Only: N/A 00:26:19.892 Firmware Activation Without Reset: N/A 00:26:19.892 Multiple Update Detection Support: N/A 00:26:19.892 Firmware Update Granularity: No Information Provided 00:26:19.892 Per-Namespace SMART Log: No 00:26:19.892 Asymmetric Namespace Access Log Page: Not Supported 00:26:19.892 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:19.892 Command Effects Log Page: Not Supported 00:26:19.892 Get Log Page Extended Data: Supported 00:26:19.892 Telemetry Log Pages: Not Supported 00:26:19.892 Persistent Event Log Pages: Not Supported 00:26:19.892 Supported Log Pages Log Page: May Support 00:26:19.892 Commands Supported & Effects Log Page: Not Supported 00:26:19.892 Feature Identifiers & Effects Log Page:May Support 00:26:19.892 NVMe-MI Commands & Effects Log Page: May Support 00:26:19.892 Data Area 4 for Telemetry Log: Not Supported 00:26:19.892 Error Log Page Entries 
Supported: 1 00:26:19.892 Keep Alive: Not Supported 00:26:19.892 00:26:19.892 NVM Command Set Attributes 00:26:19.892 ========================== 00:26:19.892 Submission Queue Entry Size 00:26:19.892 Max: 1 00:26:19.892 Min: 1 00:26:19.892 Completion Queue Entry Size 00:26:19.892 Max: 1 00:26:19.892 Min: 1 00:26:19.892 Number of Namespaces: 0 00:26:19.892 Compare Command: Not Supported 00:26:19.892 Write Uncorrectable Command: Not Supported 00:26:19.892 Dataset Management Command: Not Supported 00:26:19.892 Write Zeroes Command: Not Supported 00:26:19.892 Set Features Save Field: Not Supported 00:26:19.892 Reservations: Not Supported 00:26:19.892 Timestamp: Not Supported 00:26:19.892 Copy: Not Supported 00:26:19.892 Volatile Write Cache: Not Present 00:26:19.892 Atomic Write Unit (Normal): 1 00:26:19.892 Atomic Write Unit (PFail): 1 00:26:19.892 Atomic Compare & Write Unit: 1 00:26:19.892 Fused Compare & Write: Not Supported 00:26:19.892 Scatter-Gather List 00:26:19.892 SGL Command Set: Supported 00:26:19.892 SGL Keyed: Not Supported 00:26:19.892 SGL Bit Bucket Descriptor: Not Supported 00:26:19.892 SGL Metadata Pointer: Not Supported 00:26:19.892 Oversized SGL: Not Supported 00:26:19.892 SGL Metadata Address: Not Supported 00:26:19.892 SGL Offset: Supported 00:26:19.892 Transport SGL Data Block: Not Supported 00:26:19.892 Replay Protected Memory Block: Not Supported 00:26:19.892 00:26:19.892 Firmware Slot Information 00:26:19.892 ========================= 00:26:19.892 Active slot: 0 00:26:19.892 00:26:19.892 00:26:19.892 Error Log 00:26:19.892 ========= 00:26:19.892 00:26:19.892 Active Namespaces 00:26:19.892 ================= 00:26:19.892 Discovery Log Page 00:26:19.892 ================== 00:26:19.892 Generation Counter: 2 00:26:19.892 Number of Records: 2 00:26:19.892 Record Format: 0 00:26:19.892 00:26:19.892 Discovery Log Entry 0 00:26:19.892 ---------------------- 00:26:19.892 Transport Type: 3 (TCP) 00:26:19.892 Address Family: 1 (IPv4) 00:26:19.892 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:19.892 Entry Flags: 00:26:19.892 Duplicate Returned Information: 0 00:26:19.892 Explicit Persistent Connection Support for Discovery: 0 00:26:19.892 Transport Requirements: 00:26:19.892 Secure Channel: Not Specified 00:26:19.892 Port ID: 1 (0x0001) 00:26:19.892 Controller ID: 65535 (0xffff) 00:26:19.892 Admin Max SQ Size: 32 00:26:19.892 Transport Service Identifier: 4420 00:26:19.892 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:19.892 Transport Address: 10.0.0.1 00:26:19.892 Discovery Log Entry 1 00:26:19.892 ---------------------- 00:26:19.892 Transport Type: 3 (TCP) 00:26:19.892 Address Family: 1 (IPv4) 00:26:19.892 Subsystem Type: 2 (NVM Subsystem) 00:26:19.892 Entry Flags: 00:26:19.892 Duplicate Returned Information: 0 00:26:19.892 Explicit Persistent Connection Support for Discovery: 0 00:26:19.892 Transport Requirements: 00:26:19.892 Secure Channel: Not Specified 00:26:19.892 Port ID: 1 (0x0001) 00:26:19.892 Controller ID: 65535 (0xffff) 00:26:19.892 Admin Max SQ Size: 32 00:26:19.892 Transport Service Identifier: 4420 00:26:19.892 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:19.892 Transport Address: 10.0.0.1 00:26:19.892 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:19.892 get_feature(0x01) failed 00:26:19.892 get_feature(0x02) failed 00:26:19.892 get_feature(0x04) failed 00:26:19.892 ===================================================== 00:26:19.892 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:19.892 ===================================================== 00:26:19.892 Controller Capabilities/Features 00:26:19.892 ================================ 00:26:19.892 Vendor ID: 0000 00:26:19.892 Subsystem Vendor ID: 
0000 00:26:19.892 Serial Number: a75acb5504bff95b5a75 00:26:19.892 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:19.892 Firmware Version: 6.8.9-20 00:26:19.892 Recommended Arb Burst: 6 00:26:19.892 IEEE OUI Identifier: 00 00 00 00:26:19.892 Multi-path I/O 00:26:19.892 May have multiple subsystem ports: Yes 00:26:19.892 May have multiple controllers: Yes 00:26:19.892 Associated with SR-IOV VF: No 00:26:19.892 Max Data Transfer Size: Unlimited 00:26:19.892 Max Number of Namespaces: 1024 00:26:19.892 Max Number of I/O Queues: 128 00:26:19.893 NVMe Specification Version (VS): 1.3 00:26:19.893 NVMe Specification Version (Identify): 1.3 00:26:19.893 Maximum Queue Entries: 1024 00:26:19.893 Contiguous Queues Required: No 00:26:19.893 Arbitration Mechanisms Supported 00:26:19.893 Weighted Round Robin: Not Supported 00:26:19.893 Vendor Specific: Not Supported 00:26:19.893 Reset Timeout: 7500 ms 00:26:19.893 Doorbell Stride: 4 bytes 00:26:19.893 NVM Subsystem Reset: Not Supported 00:26:19.893 Command Sets Supported 00:26:19.893 NVM Command Set: Supported 00:26:19.893 Boot Partition: Not Supported 00:26:19.893 Memory Page Size Minimum: 4096 bytes 00:26:19.893 Memory Page Size Maximum: 4096 bytes 00:26:19.893 Persistent Memory Region: Not Supported 00:26:19.893 Optional Asynchronous Events Supported 00:26:19.893 Namespace Attribute Notices: Supported 00:26:19.893 Firmware Activation Notices: Not Supported 00:26:19.893 ANA Change Notices: Supported 00:26:19.893 PLE Aggregate Log Change Notices: Not Supported 00:26:19.893 LBA Status Info Alert Notices: Not Supported 00:26:19.893 EGE Aggregate Log Change Notices: Not Supported 00:26:19.893 Normal NVM Subsystem Shutdown event: Not Supported 00:26:19.893 Zone Descriptor Change Notices: Not Supported 00:26:19.893 Discovery Log Change Notices: Not Supported 00:26:19.893 Controller Attributes 00:26:19.893 128-bit Host Identifier: Supported 00:26:19.893 Non-Operational Permissive Mode: Not Supported 00:26:19.893 NVM Sets: Not 
Supported 00:26:19.893 Read Recovery Levels: Not Supported 00:26:19.893 Endurance Groups: Not Supported 00:26:19.893 Predictable Latency Mode: Not Supported 00:26:19.893 Traffic Based Keep ALive: Supported 00:26:19.893 Namespace Granularity: Not Supported 00:26:19.893 SQ Associations: Not Supported 00:26:19.893 UUID List: Not Supported 00:26:19.893 Multi-Domain Subsystem: Not Supported 00:26:19.893 Fixed Capacity Management: Not Supported 00:26:19.893 Variable Capacity Management: Not Supported 00:26:19.893 Delete Endurance Group: Not Supported 00:26:19.893 Delete NVM Set: Not Supported 00:26:19.893 Extended LBA Formats Supported: Not Supported 00:26:19.893 Flexible Data Placement Supported: Not Supported 00:26:19.893 00:26:19.893 Controller Memory Buffer Support 00:26:19.893 ================================ 00:26:19.893 Supported: No 00:26:19.893 00:26:19.893 Persistent Memory Region Support 00:26:19.893 ================================ 00:26:19.893 Supported: No 00:26:19.893 00:26:19.893 Admin Command Set Attributes 00:26:19.893 ============================ 00:26:19.893 Security Send/Receive: Not Supported 00:26:19.893 Format NVM: Not Supported 00:26:19.893 Firmware Activate/Download: Not Supported 00:26:19.893 Namespace Management: Not Supported 00:26:19.893 Device Self-Test: Not Supported 00:26:19.893 Directives: Not Supported 00:26:19.893 NVMe-MI: Not Supported 00:26:19.893 Virtualization Management: Not Supported 00:26:19.893 Doorbell Buffer Config: Not Supported 00:26:19.893 Get LBA Status Capability: Not Supported 00:26:19.893 Command & Feature Lockdown Capability: Not Supported 00:26:19.893 Abort Command Limit: 4 00:26:19.893 Async Event Request Limit: 4 00:26:19.893 Number of Firmware Slots: N/A 00:26:19.893 Firmware Slot 1 Read-Only: N/A 00:26:19.893 Firmware Activation Without Reset: N/A 00:26:19.893 Multiple Update Detection Support: N/A 00:26:19.893 Firmware Update Granularity: No Information Provided 00:26:19.893 Per-Namespace SMART Log: Yes 
00:26:19.893 Asymmetric Namespace Access Log Page: Supported 00:26:19.893 ANA Transition Time : 10 sec 00:26:19.893 00:26:19.893 Asymmetric Namespace Access Capabilities 00:26:19.893 ANA Optimized State : Supported 00:26:19.893 ANA Non-Optimized State : Supported 00:26:19.893 ANA Inaccessible State : Supported 00:26:19.893 ANA Persistent Loss State : Supported 00:26:19.893 ANA Change State : Supported 00:26:19.893 ANAGRPID is not changed : No 00:26:19.893 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:19.893 00:26:19.893 ANA Group Identifier Maximum : 128 00:26:19.893 Number of ANA Group Identifiers : 128 00:26:19.893 Max Number of Allowed Namespaces : 1024 00:26:19.893 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:19.893 Command Effects Log Page: Supported 00:26:19.893 Get Log Page Extended Data: Supported 00:26:19.893 Telemetry Log Pages: Not Supported 00:26:19.893 Persistent Event Log Pages: Not Supported 00:26:19.893 Supported Log Pages Log Page: May Support 00:26:19.893 Commands Supported & Effects Log Page: Not Supported 00:26:19.893 Feature Identifiers & Effects Log Page:May Support 00:26:19.893 NVMe-MI Commands & Effects Log Page: May Support 00:26:19.893 Data Area 4 for Telemetry Log: Not Supported 00:26:19.893 Error Log Page Entries Supported: 128 00:26:19.893 Keep Alive: Supported 00:26:19.893 Keep Alive Granularity: 1000 ms 00:26:19.893 00:26:19.893 NVM Command Set Attributes 00:26:19.893 ========================== 00:26:19.893 Submission Queue Entry Size 00:26:19.893 Max: 64 00:26:19.893 Min: 64 00:26:19.893 Completion Queue Entry Size 00:26:19.893 Max: 16 00:26:19.893 Min: 16 00:26:19.893 Number of Namespaces: 1024 00:26:19.893 Compare Command: Not Supported 00:26:19.893 Write Uncorrectable Command: Not Supported 00:26:19.893 Dataset Management Command: Supported 00:26:19.893 Write Zeroes Command: Supported 00:26:19.893 Set Features Save Field: Not Supported 00:26:19.893 Reservations: Not Supported 00:26:19.893 Timestamp: Not Supported 
00:26:19.893 Copy: Not Supported 00:26:19.893 Volatile Write Cache: Present 00:26:19.893 Atomic Write Unit (Normal): 1 00:26:19.893 Atomic Write Unit (PFail): 1 00:26:19.893 Atomic Compare & Write Unit: 1 00:26:19.893 Fused Compare & Write: Not Supported 00:26:19.893 Scatter-Gather List 00:26:19.893 SGL Command Set: Supported 00:26:19.893 SGL Keyed: Not Supported 00:26:19.893 SGL Bit Bucket Descriptor: Not Supported 00:26:19.893 SGL Metadata Pointer: Not Supported 00:26:19.893 Oversized SGL: Not Supported 00:26:19.893 SGL Metadata Address: Not Supported 00:26:19.893 SGL Offset: Supported 00:26:19.893 Transport SGL Data Block: Not Supported 00:26:19.893 Replay Protected Memory Block: Not Supported 00:26:19.893 00:26:19.893 Firmware Slot Information 00:26:19.893 ========================= 00:26:19.893 Active slot: 0 00:26:19.893 00:26:19.893 Asymmetric Namespace Access 00:26:19.893 =========================== 00:26:19.893 Change Count : 0 00:26:19.893 Number of ANA Group Descriptors : 1 00:26:19.893 ANA Group Descriptor : 0 00:26:19.893 ANA Group ID : 1 00:26:19.893 Number of NSID Values : 1 00:26:19.893 Change Count : 0 00:26:19.893 ANA State : 1 00:26:19.893 Namespace Identifier : 1 00:26:19.893 00:26:19.893 Commands Supported and Effects 00:26:19.893 ============================== 00:26:19.893 Admin Commands 00:26:19.893 -------------- 00:26:19.893 Get Log Page (02h): Supported 00:26:19.893 Identify (06h): Supported 00:26:19.893 Abort (08h): Supported 00:26:19.893 Set Features (09h): Supported 00:26:19.893 Get Features (0Ah): Supported 00:26:19.893 Asynchronous Event Request (0Ch): Supported 00:26:19.893 Keep Alive (18h): Supported 00:26:19.893 I/O Commands 00:26:19.893 ------------ 00:26:19.893 Flush (00h): Supported 00:26:19.893 Write (01h): Supported LBA-Change 00:26:19.893 Read (02h): Supported 00:26:19.893 Write Zeroes (08h): Supported LBA-Change 00:26:19.893 Dataset Management (09h): Supported 00:26:19.893 00:26:19.893 Error Log 00:26:19.893 ========= 
00:26:19.893 Entry: 0 00:26:19.893 Error Count: 0x3 00:26:19.893 Submission Queue Id: 0x0 00:26:19.893 Command Id: 0x5 00:26:19.893 Phase Bit: 0 00:26:19.893 Status Code: 0x2 00:26:19.893 Status Code Type: 0x0 00:26:19.893 Do Not Retry: 1 00:26:19.893 Error Location: 0x28 00:26:19.893 LBA: 0x0 00:26:19.893 Namespace: 0x0 00:26:19.893 Vendor Log Page: 0x0 00:26:19.893 ----------- 00:26:19.893 Entry: 1 00:26:19.893 Error Count: 0x2 00:26:19.893 Submission Queue Id: 0x0 00:26:19.893 Command Id: 0x5 00:26:19.893 Phase Bit: 0 00:26:19.893 Status Code: 0x2 00:26:19.893 Status Code Type: 0x0 00:26:19.893 Do Not Retry: 1 00:26:19.893 Error Location: 0x28 00:26:19.893 LBA: 0x0 00:26:19.893 Namespace: 0x0 00:26:19.893 Vendor Log Page: 0x0 00:26:19.893 ----------- 00:26:19.893 Entry: 2 00:26:19.893 Error Count: 0x1 00:26:19.893 Submission Queue Id: 0x0 00:26:19.893 Command Id: 0x4 00:26:19.893 Phase Bit: 0 00:26:19.893 Status Code: 0x2 00:26:19.893 Status Code Type: 0x0 00:26:19.893 Do Not Retry: 1 00:26:19.893 Error Location: 0x28 00:26:19.893 LBA: 0x0 00:26:19.893 Namespace: 0x0 00:26:19.894 Vendor Log Page: 0x0 00:26:19.894 00:26:19.894 Number of Queues 00:26:19.894 ================ 00:26:19.894 Number of I/O Submission Queues: 128 00:26:19.894 Number of I/O Completion Queues: 128 00:26:19.894 00:26:19.894 ZNS Specific Controller Data 00:26:19.894 ============================ 00:26:19.894 Zone Append Size Limit: 0 00:26:19.894 00:26:19.894 00:26:19.894 Active Namespaces 00:26:19.894 ================= 00:26:19.894 get_feature(0x05) failed 00:26:19.894 Namespace ID:1 00:26:19.894 Command Set Identifier: NVM (00h) 00:26:19.894 Deallocate: Supported 00:26:19.894 Deallocated/Unwritten Error: Not Supported 00:26:19.894 Deallocated Read Value: Unknown 00:26:19.894 Deallocate in Write Zeroes: Not Supported 00:26:19.894 Deallocated Guard Field: 0xFFFF 00:26:19.894 Flush: Supported 00:26:19.894 Reservation: Not Supported 00:26:19.894 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:19.894 Size (in LBAs): 3125627568 (1490GiB) 00:26:19.894 Capacity (in LBAs): 3125627568 (1490GiB) 00:26:19.894 Utilization (in LBAs): 3125627568 (1490GiB) 00:26:19.894 UUID: 11513e3a-000c-4dce-a262-9d0cf8c6f222 00:26:19.894 Thin Provisioning: Not Supported 00:26:19.894 Per-NS Atomic Units: Yes 00:26:19.894 Atomic Boundary Size (Normal): 0 00:26:19.894 Atomic Boundary Size (PFail): 0 00:26:19.894 Atomic Boundary Offset: 0 00:26:19.894 NGUID/EUI64 Never Reused: No 00:26:19.894 ANA group ID: 1 00:26:19.894 Namespace Write Protected: No 00:26:19.894 Number of LBA Formats: 1 00:26:19.894 Current LBA Format: LBA Format #00 00:26:19.894 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:19.894 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:19.894 rmmod nvme_tcp 00:26:19.894 rmmod nvme_fabrics 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.894 15:43:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.424 15:43:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:22.424 15:43:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:22.424 15:43:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:22.424 15:43:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:22.424 15:43:27 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:22.424 15:43:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:22.424 15:43:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:22.424 15:43:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:22.424 15:43:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:22.424 15:43:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:22.424 15:43:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:24.960 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:24.960 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:26:26.339 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:26.597 00:26:26.597 real 0m17.316s 00:26:26.597 user 0m4.330s 00:26:26.597 sys 0m8.745s 00:26:26.597 15:43:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:26.597 15:43:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:26.597 ************************************ 00:26:26.597 END TEST nvmf_identify_kernel_target 00:26:26.597 ************************************ 00:26:26.597 15:43:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:26.597 15:43:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:26.597 15:43:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:26.597 15:43:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.597 ************************************ 00:26:26.597 START TEST nvmf_auth_host 00:26:26.597 ************************************ 00:26:26.597 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:26.597 * Looking for test storage... 
00:26:26.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:26.597 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:26.597 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:26.597 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:26.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.855 --rc genhtml_branch_coverage=1 00:26:26.855 --rc genhtml_function_coverage=1 00:26:26.855 --rc genhtml_legend=1 00:26:26.855 --rc geninfo_all_blocks=1 00:26:26.855 --rc geninfo_unexecuted_blocks=1 00:26:26.855 00:26:26.855 ' 00:26:26.855 15:43:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:26.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.855 --rc genhtml_branch_coverage=1 00:26:26.855 --rc genhtml_function_coverage=1 00:26:26.855 --rc genhtml_legend=1 00:26:26.855 --rc geninfo_all_blocks=1 00:26:26.855 --rc geninfo_unexecuted_blocks=1 00:26:26.855 00:26:26.855 ' 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:26.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.855 --rc genhtml_branch_coverage=1 00:26:26.855 --rc genhtml_function_coverage=1 00:26:26.855 --rc genhtml_legend=1 00:26:26.855 --rc geninfo_all_blocks=1 00:26:26.855 --rc geninfo_unexecuted_blocks=1 00:26:26.855 00:26:26.855 ' 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:26.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.855 --rc genhtml_branch_coverage=1 00:26:26.855 --rc genhtml_function_coverage=1 00:26:26.855 --rc genhtml_legend=1 00:26:26.855 --rc geninfo_all_blocks=1 00:26:26.855 --rc geninfo_unexecuted_blocks=1 00:26:26.855 00:26:26.855 ' 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.855 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.856 15:43:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:26.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:26.856 15:43:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:26.856 15:43:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:33.522 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:33.522 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:33.522 Found net devices under 0000:86:00.0: cvl_0_0 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:33.522 Found net devices under 0000:86:00.1: cvl_0_1 00:26:33.522 15:43:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.522 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:33.523 15:43:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:33.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:26:33.523 00:26:33.523 --- 10.0.0.2 ping statistics --- 00:26:33.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.523 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:33.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:26:33.523 00:26:33.523 --- 10.0.0.1 ping statistics --- 00:26:33.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.523 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3141761 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3141761 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3141761 ']' 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:33.523 15:43:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3cc26c34797f656ce5d08a386af8a022 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.dUk 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3cc26c34797f656ce5d08a386af8a022 0 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3cc26c34797f656ce5d08a386af8a022 0 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3cc26c34797f656ce5d08a386af8a022 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.dUk 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.dUk 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.dUk 
00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1c21960448ab2e3d1a7a5384336da31d061936a8768ae510a95b13fb7acf8248 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SVv 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1c21960448ab2e3d1a7a5384336da31d061936a8768ae510a95b13fb7acf8248 3 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1c21960448ab2e3d1a7a5384336da31d061936a8768ae510a95b13fb7acf8248 3 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1c21960448ab2e3d1a7a5384336da31d061936a8768ae510a95b13fb7acf8248 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:33.523 15:43:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SVv 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SVv 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.SVv 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cf8e4471c7f5ba7f3b387208eb75e065f40da00388339c4a 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7zm 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cf8e4471c7f5ba7f3b387208eb75e065f40da00388339c4a 0 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cf8e4471c7f5ba7f3b387208eb75e065f40da00388339c4a 0 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cf8e4471c7f5ba7f3b387208eb75e065f40da00388339c4a 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7zm 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7zm 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.7zm 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:33.523 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=54ff6ebe5cc21dfdc18a8b2bcd7132b4e0dcb5a8f8a4fb64 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fQ2 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 54ff6ebe5cc21dfdc18a8b2bcd7132b4e0dcb5a8f8a4fb64 2 00:26:33.524 15:43:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 54ff6ebe5cc21dfdc18a8b2bcd7132b4e0dcb5a8f8a4fb64 2 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=54ff6ebe5cc21dfdc18a8b2bcd7132b4e0dcb5a8f8a4fb64 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fQ2 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fQ2 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.fQ2 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=056f2983765f4b518ede8dc075a71a46 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.WEB 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 056f2983765f4b518ede8dc075a71a46 1 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 056f2983765f4b518ede8dc075a71a46 1 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=056f2983765f4b518ede8dc075a71a46 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.WEB 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.WEB 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.WEB 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7b64fc57975b61f8d1e69f60595b2e3b 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7F7 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7b64fc57975b61f8d1e69f60595b2e3b 1 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7b64fc57975b61f8d1e69f60595b2e3b 1 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7b64fc57975b61f8d1e69f60595b2e3b 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7F7 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7F7 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.7F7 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:33.524 15:43:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=301f3c97326c2eb6f549a7f477378e2352cecffbc2f44736 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zHb 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 301f3c97326c2eb6f549a7f477378e2352cecffbc2f44736 2 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 301f3c97326c2eb6f549a7f477378e2352cecffbc2f44736 2 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=301f3c97326c2eb6f549a7f477378e2352cecffbc2f44736 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zHb 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zHb 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.zHb 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=45559554ea71ea21e2d1e531b8ca3039 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rrk 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 45559554ea71ea21e2d1e531b8ca3039 0 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 45559554ea71ea21e2d1e531b8ca3039 0 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=45559554ea71ea21e2d1e531b8ca3039 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rrk 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rrk 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.rrk 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7260aee2a7e06f99a97c87193d215272aa2b856ac3c585846094a323b3f3ad3e 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.X3o 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7260aee2a7e06f99a97c87193d215272aa2b856ac3c585846094a323b3f3ad3e 3 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7260aee2a7e06f99a97c87193d215272aa2b856ac3c585846094a323b3f3ad3e 3 00:26:33.524 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:33.525 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:33.525 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7260aee2a7e06f99a97c87193d215272aa2b856ac3c585846094a323b3f3ad3e 00:26:33.525 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:33.525 15:43:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:33.525 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.X3o 00:26:33.525 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.X3o 00:26:33.525 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.X3o 00:26:33.525 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:33.525 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3141761 00:26:33.525 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3141761 ']' 00:26:33.525 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.525 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:33.525 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
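The repeated gen_dhchap_key sequence above (xxd on /dev/urandom, then format_dhchap_key via an inline python snippet, then chmod 0600) produces secrets like `DHHC-1:00:Y2Y4...Thpw==:`. A minimal Python sketch of what that formatting step appears to do — the function name here is illustrative, and the trailing little-endian CRC32 suffix is an assumption inferred from the DHHC-1 key layout, not printed verbatim in this log:

```python
import base64
import struct
import zlib

def format_dhchap_key(key_hex: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Illustrative sketch: wrap an ASCII hex secret as
    <prefix>:<digest, two hex digits>:<base64(secret || CRC32-LE of secret)>:
    (CRC32 suffix is an assumption about the DHHC-1 layout)."""
    secret = key_hex.encode("ascii")
    # little-endian CRC32 of the ASCII secret, appended before base64-encoding
    crc = struct.pack("<I", zlib.crc32(secret) & 0xFFFFFFFF)
    b64 = base64.b64encode(secret + crc).decode("ascii")
    return f"{prefix}:{digest:02x}:{b64}:"

# Throwaway 48-hex-char secret, as `gen_dhchap_key null 48` would draw from /dev/urandom
demo = format_dhchap_key("cf8e4471c7f5ba7f3b387208eb75e065f40da00388339c4a", 0)
```

The digest field encodes the hash choice the log's `digests` map shows (null=0, sha256=1, sha384=2, sha512=3), which is why the null keys start with `DHHC-1:00:` and the sha512 complement keys with `DHHC-1:03:`.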
00:26:33.525 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:33.525 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dUk 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.SVv ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SVv 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.7zm 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.fQ2 ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fQ2 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.WEB 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.7F7 ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7F7 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.zHb 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.rrk ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.rrk 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.X3o 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.785 15:43:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:33.785 15:43:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:37.074 Waiting for block devices as requested 00:26:37.074 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:37.074 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:37.074 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:37.074 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:37.074 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:37.075 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:37.075 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:37.075 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:37.075 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:37.334 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:37.334 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:37.334 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:37.334 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:37.593 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:37.593 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:37.593 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:37.852 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:38.421 No valid GPT data, bailing 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:38.421 00:26:38.421 Discovery Log Number of Records 2, Generation counter 2 00:26:38.421 =====Discovery Log Entry 0====== 00:26:38.421 trtype: tcp 00:26:38.421 adrfam: ipv4 00:26:38.421 subtype: current discovery subsystem 00:26:38.421 treq: not specified, sq flow control disable supported 00:26:38.421 portid: 1 00:26:38.421 trsvcid: 4420 00:26:38.421 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:38.421 traddr: 10.0.0.1 00:26:38.421 eflags: none 00:26:38.421 sectype: none 00:26:38.421 =====Discovery Log Entry 1====== 00:26:38.421 trtype: tcp 00:26:38.421 adrfam: ipv4 00:26:38.421 subtype: nvme subsystem 00:26:38.421 treq: not specified, sq flow control disable supported 00:26:38.421 portid: 1 00:26:38.421 trsvcid: 4420 00:26:38.421 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:38.421 traddr: 10.0.0.1 00:26:38.421 eflags: none 00:26:38.421 sectype: none 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.421 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.682 nvme0n1 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]] 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.682 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.942 nvme0n1 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.942 15:43:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.942 
15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.942 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.222 nvme0n1 00:26:39.222 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.222 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.222 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.222 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.222 15:43:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:39.222 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.223 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:39.483 nvme0n1 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]] 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.483 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.484 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.484 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.484 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:39.484 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.484 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.484 nvme0n1 00:26:39.484 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.484 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.484 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:26:39.484 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.484 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.484 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:39.743 15:43:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.743 nvme0n1 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.743 
15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]] 00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:39.743 
15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:39.743 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.003 nvme0n1
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==:
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==:
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==:
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]]
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==:
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:40.003 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:40.263 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:40.263 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:40.263 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:40.263 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:40.263 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.263 15:43:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.263 nvme0n1
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm:
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY:
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm:
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]]
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY:
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.263 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.522 nvme0n1
15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:40.522 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==:
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y:
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==:
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]]
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y:
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:40.523 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.784 nvme0n1
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=:
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=:
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:40.784 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.043 nvme0n1
15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.043 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:41.043 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:41.043 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.043 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.043 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.043 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:41.043 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:41.043 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.043 15:43:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.043 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.043 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:41.043 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:41.043 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:26:41.043 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:41.043 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:41.043 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P:
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=:
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P:
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]]
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=:
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.044 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.303 nvme0n1
15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.303 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:41.303 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:41.303 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.303 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.303 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.303 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:41.303 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:41.303 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.303 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==:
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==:
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==:
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]]
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==:
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.563 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.823 nvme0n1
15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm:
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY:
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm:
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]]
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY:
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.823 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:42.082 nvme0n1
15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:42.082 15:43:47
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:42.082 
15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]] 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.082 15:43:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.082 15:43:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.342 nvme0n1 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.342 15:43:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.342 
15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.342 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.602 nvme0n1 00:26:42.602 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.602 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.602 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.602 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.602 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.602 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.602 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.602 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.602 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.602 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.861 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.861 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:42.861 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.861 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:42.861 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.861 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.861 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:42.861 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:42.861 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:42.861 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:42.861 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.861 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]] 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.862 15:43:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.862 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.121 nvme0n1 00:26:43.121 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.121 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.121 15:43:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:43.121 15:43:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.121 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.687 nvme0n1 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.687 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.688 15:43:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.688 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.946 nvme0n1 00:26:43.946 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.946 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.946 15:43:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.946 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.946 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.946 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.203 15:43:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]] 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.203 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.204 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.204 15:43:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.204 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.204 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.204 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.204 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.204 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.204 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.204 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.204 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:44.204 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.204 15:43:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.461 nvme0n1 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.461 15:43:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:26:44.461 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.462 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.462 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:44.462 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:44.462 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.462 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:44.462 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.462 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.462 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.720 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.720 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.720 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.720 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.720 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.720 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.720 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.720 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.720 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.720 15:43:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.720 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.720 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:44.720 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.720 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.979 nvme0n1 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]] 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.979 15:43:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.979 15:43:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.546 nvme0n1 00:26:45.546 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.546 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.546 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.546 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.546 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.546 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.546 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.546 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.546 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.546 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.805 15:43:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.805 15:43:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:45.805 15:43:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.805 15:43:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.372 nvme0n1 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.372 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.373 15:43:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.373 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.940 nvme0n1 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]] 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.940 15:43:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.509 nvme0n1 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.509 
15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.509 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.768 15:43:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.336 nvme0n1 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:48.336 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]] 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.337 nvme0n1 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.337 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.596 
15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.596 nvme0n1 
00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:48.596 15:43:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.596 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.855 
15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.855 nvme0n1 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.855 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.855 15:43:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]] 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.856 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.116 nvme0n1 00:26:49.116 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.116 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.116 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.116 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.116 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.116 15:43:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.116 15:43:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.116 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.375 nvme0n1 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:49.375 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]] 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.376 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.635 nvme0n1 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:49.635 
15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.635 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.894 nvme0n1 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 
00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.894 15:43:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.894 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.153 nvme0n1 00:26:50.153 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.153 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.153 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.153 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.153 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.153 15:43:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.153 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.153 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.153 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]] 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.154 15:43:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.154 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.413 nvme0n1 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.413 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.673 nvme0n1 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.673 15:43:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:50.673 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]] 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.674 15:43:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:50.674 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.674 15:43:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 nvme0n1 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.933 
15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.933 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.934 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:50.934 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.934 15:43:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.193 nvme0n1 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.193 15:43:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.193 15:43:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.193 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.451 nvme0n1 00:26:51.451 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.451 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.451 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.451 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.451 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.451 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.451 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.451 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.451 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.451 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]] 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.710 15:43:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.710 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.969 nvme0n1 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.969 15:43:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:51.969 15:43:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:51.969 
15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.969 15:43:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.228 nvme0n1 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.228 15:43:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]] 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.228 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.797 nvme0n1 
00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:52.797 15:43:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.797 
15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.797 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.056 nvme0n1 00:26:53.056 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.056 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.056 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.056 15:43:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.056 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.056 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.056 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.056 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.056 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.056 15:43:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.056 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.056 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.056 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:53.056 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.056 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.056 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:53.056 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:53.056 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:53.056 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:53.056 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.056 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:53.056 15:43:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:53.056 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:26:53.056 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.057 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.625 nvme0n1 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]] 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:53.625 15:43:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.625 15:43:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.625 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 nvme0n1 00:26:53.884 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.884 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.884 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.884 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.884 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.884 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.143 15:43:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:54.143 15:43:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.403 nvme0n1 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:54.403 15:44:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]] 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.403 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.971 nvme0n1 00:26:54.971 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:54.971 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.971 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.971 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.971 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.971 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.230 15:44:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.230 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.799 nvme0n1 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.799 15:44:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.366 nvme0n1 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]] 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.366 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.933 nvme0n1 00:26:56.933 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.934 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.934 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.934 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.934 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.934 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.193 15:44:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:57.762 nvme0n1 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]] 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:57.762 15:44:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.762 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.021 nvme0n1 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.021 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.022 15:44:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.022 nvme0n1 00:26:58.022 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.281 nvme0n1 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.281 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]] 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.541 nvme0n1 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:58.541 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:58.801 nvme0n1 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:58.801 15:44:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]] 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.801 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.802 15:44:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.802 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.061 nvme0n1 00:26:59.061 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.061 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.061 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.061 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.061 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.061 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.061 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.061 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.061 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.061 15:44:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:59.061 15:44:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.061 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.319 nvme0n1 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.319 
15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.319 15:44:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.319 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.320 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.320 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.320 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.320 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.320 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.320 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.578 nvme0n1 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.578 15:44:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]] 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.578 15:44:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.578 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.837 nvme0n1 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:59.837 15:44:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.837 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.096 nvme0n1 00:27:00.096 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.096 
15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.096 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.096 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.096 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.096 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.096 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.096 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.096 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.096 15:44:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]] 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.096 
15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.096 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.097 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.097 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.097 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.097 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.097 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.356 nvme0n1 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.356 15:44:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.356 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.615 nvme0n1 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.615 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.875 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.876 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.876 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:00.876 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.876 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.135 nvme0n1 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]] 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.135 15:44:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.135 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.136 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:01.136 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.136 15:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.136 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.395 nvme0n1 00:27:01.395 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.395 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.396 15:44:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.396 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.655 nvme0n1 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.655 
15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]] 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.655 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.914 15:44:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.914 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.914 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.914 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.914 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.914 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.914 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.914 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.914 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.914 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.914 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.914 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:01.914 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.914 15:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.173 nvme0n1 00:27:02.173 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.173 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.173 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.173 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.173 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.173 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.173 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.173 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.173 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.173 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.173 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.173 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.174 15:44:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.174 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.744 nvme0n1 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:27:02.744 
15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.744 15:44:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.744 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.004 nvme0n1 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.004 15:44:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]] 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.004 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.263 15:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:03.263 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.263 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.523 nvme0n1 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.523 15:44:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.523 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.092 nvme0n1 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.092 
15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2NjMjZjMzQ3OTdmNjU2Y2U1ZDA4YTM4NmFmOGEwMjJanE5P: 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: ]] 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWMyMTk2MDQ0OGFiMmUzZDFhN2E1Mzg0MzM2ZGEzMWQwNjE5MzZhODc2OGFlNTEwYTk1YjEzZmI3YWNmODI0ODyiohw=: 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.092 15:44:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.092 15:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.660 nvme0n1 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.660 15:44:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:27:04.660 15:44:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.660 15:44:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.660 15:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.229 nvme0n1 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.229 15:44:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:05.229 15:44:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.229 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.798 nvme0n1 00:27:05.798 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.798 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.798 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.798 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.798 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.798 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzAxZjNjOTczMjZjMmViNmY1NDlhN2Y0NzczNzhlMjM1MmNlY2ZmYmMyZjQ0NzM26VD3Gg==: 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: ]] 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDU1NTk1NTRlYTcxZWEyMWUyZDFlNTMxYjhjYTMwMzlo5+4y: 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.058 15:44:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.058 15:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.625 nvme0n1 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzI2MGFlZTJhN2UwNmY5OWE5N2M4NzE5M2QyMTUyNzJhYTJiODU2YWMzYzU4NTg0NjA5NGEzMjNiM2YzYWQzZVQhjB8=: 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.625 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.626 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.626 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.626 
15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.626 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.626 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.626 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.626 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.626 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.626 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.626 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.626 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.626 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.626 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:06.626 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.626 15:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.194 nvme0n1 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.194 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.195 request: 00:27:07.195 { 00:27:07.195 "name": "nvme0", 00:27:07.195 "trtype": "tcp", 00:27:07.195 "traddr": "10.0.0.1", 00:27:07.195 "adrfam": "ipv4", 00:27:07.195 "trsvcid": "4420", 00:27:07.195 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:07.195 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:07.195 "prchk_reftag": false, 00:27:07.195 "prchk_guard": false, 00:27:07.195 "hdgst": false, 00:27:07.195 "ddgst": false, 00:27:07.195 "allow_unrecognized_csi": false, 00:27:07.195 "method": "bdev_nvme_attach_controller", 00:27:07.195 "req_id": 1 00:27:07.195 } 00:27:07.195 Got JSON-RPC error 
response 00:27:07.195 response: 00:27:07.195 { 00:27:07.195 "code": -5, 00:27:07.195 "message": "Input/output error" 00:27:07.195 } 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.195 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.454 request: 
00:27:07.454 { 00:27:07.454 "name": "nvme0", 00:27:07.454 "trtype": "tcp", 00:27:07.454 "traddr": "10.0.0.1", 00:27:07.454 "adrfam": "ipv4", 00:27:07.454 "trsvcid": "4420", 00:27:07.454 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:07.454 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:07.454 "prchk_reftag": false, 00:27:07.454 "prchk_guard": false, 00:27:07.454 "hdgst": false, 00:27:07.454 "ddgst": false, 00:27:07.454 "dhchap_key": "key2", 00:27:07.454 "allow_unrecognized_csi": false, 00:27:07.454 "method": "bdev_nvme_attach_controller", 00:27:07.454 "req_id": 1 00:27:07.454 } 00:27:07.454 Got JSON-RPC error response 00:27:07.454 response: 00:27:07.454 { 00:27:07.454 "code": -5, 00:27:07.454 "message": "Input/output error" 00:27:07.454 } 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:07.454 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:07.455 15:44:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.455 request: 00:27:07.455 { 00:27:07.455 "name": "nvme0", 00:27:07.455 "trtype": "tcp", 00:27:07.455 "traddr": "10.0.0.1", 00:27:07.455 "adrfam": "ipv4", 00:27:07.455 "trsvcid": "4420", 00:27:07.455 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:07.455 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:07.455 "prchk_reftag": false, 00:27:07.455 "prchk_guard": false, 00:27:07.455 "hdgst": false, 00:27:07.455 "ddgst": false, 00:27:07.455 "dhchap_key": "key1", 00:27:07.455 "dhchap_ctrlr_key": "ckey2", 00:27:07.455 "allow_unrecognized_csi": false, 00:27:07.455 "method": "bdev_nvme_attach_controller", 00:27:07.455 "req_id": 1 00:27:07.455 } 00:27:07.455 Got JSON-RPC error response 00:27:07.455 response: 00:27:07.455 { 00:27:07.455 "code": -5, 00:27:07.455 "message": "Input/output error" 00:27:07.455 } 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.455 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.712 nvme0n1 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:07.712 15:44:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:07.712 
15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:07.712 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:07.713 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.713 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.971 request: 00:27:07.971 { 00:27:07.971 "name": "nvme0", 00:27:07.971 "dhchap_key": "key1", 00:27:07.971 "dhchap_ctrlr_key": "ckey2", 00:27:07.971 "method": "bdev_nvme_set_keys", 00:27:07.971 "req_id": 1 00:27:07.971 } 00:27:07.971 Got JSON-RPC error response 00:27:07.971 response: 
00:27:07.971 { 00:27:07.971 "code": -13, 00:27:07.971 "message": "Permission denied" 00:27:07.971 } 00:27:07.971 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:07.971 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:07.971 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:07.971 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:07.971 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:07.971 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.971 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:07.971 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.971 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.971 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.971 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:07.971 15:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:08.908 15:44:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y4ZTQ0NzFjN2Y1YmE3ZjNiMzg3MjA4ZWI3NWUwNjVmNDBkYTAwMzg4MzM5YzRhN4Thpw==: 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: ]] 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTRmZjZlYmU1Y2MyMWRmZGMxOGE4YjJiY2Q3MTMyYjRlMGRjYjVhOGY4YTRmYjY0mX+bww==: 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.908 15:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.254 nvme0n1 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU2ZjI5ODM3NjVmNGI1MThlZGU4ZGMwNzVhNzFhNDY2pthm: 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: ]] 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2I2NGZjNTc5NzViNjFmOGQxZTY5ZjYwNTk1YjJlM2LiZowY: 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.254 request: 00:27:09.254 { 00:27:09.254 "name": "nvme0", 00:27:09.254 "dhchap_key": "key2", 00:27:09.254 "dhchap_ctrlr_key": "ckey1", 00:27:09.254 "method": "bdev_nvme_set_keys", 00:27:09.254 "req_id": 1 00:27:09.254 } 00:27:09.254 Got JSON-RPC error response 00:27:09.254 response: 00:27:09.254 { 00:27:09.254 "code": -13, 00:27:09.254 "message": "Permission denied" 00:27:09.254 } 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.254 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.255 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.255 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:09.255 15:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:10.217 
15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:10.217 rmmod nvme_tcp 00:27:10.217 rmmod nvme_fabrics 00:27:10.217 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3141761 ']' 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3141761 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3141761 ']' 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@958 -- # kill -0 3141761 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3141761 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3141761' 00:27:10.477 killing process with pid 3141761 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3141761 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3141761 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:10.477 15:44:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.477 15:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.007 15:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:13.007 15:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:13.007 15:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:13.007 15:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:13.007 15:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:13.007 15:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:13.007 15:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:13.007 15:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:13.007 15:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:13.008 15:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:13.008 15:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:13.008 15:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:13.008 15:44:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:15.538 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:15.538 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:16.911 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:17.171 15:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.dUk /tmp/spdk.key-null.7zm /tmp/spdk.key-sha256.WEB /tmp/spdk.key-sha384.zHb /tmp/spdk.key-sha512.X3o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:17.171 15:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:19.706 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:19.706 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:19.706 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:19.706 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:19.706 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:19.706 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 
00:27:19.706 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:19.706 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:19.706 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:19.706 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:19.965 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:19.965 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:19.965 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:19.965 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:19.965 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:19.965 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:19.965 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:19.965 00:27:19.965 real 0m53.407s 00:27:19.965 user 0m47.542s 00:27:19.965 sys 0m12.616s 00:27:19.965 15:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:19.965 15:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.965 ************************************ 00:27:19.965 END TEST nvmf_auth_host 00:27:19.965 ************************************ 00:27:19.965 15:44:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:19.965 15:44:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:19.965 15:44:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:19.965 15:44:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.965 15:44:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.965 ************************************ 00:27:19.965 START TEST nvmf_digest 00:27:19.965 ************************************ 00:27:19.965 15:44:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:20.225 * Looking for test storage... 00:27:20.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:20.225 15:44:26 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:20.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.225 --rc genhtml_branch_coverage=1 00:27:20.225 --rc genhtml_function_coverage=1 00:27:20.225 --rc genhtml_legend=1 00:27:20.225 --rc geninfo_all_blocks=1 00:27:20.225 --rc 
geninfo_unexecuted_blocks=1 00:27:20.225 00:27:20.225 ' 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:20.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.225 --rc genhtml_branch_coverage=1 00:27:20.225 --rc genhtml_function_coverage=1 00:27:20.225 --rc genhtml_legend=1 00:27:20.225 --rc geninfo_all_blocks=1 00:27:20.225 --rc geninfo_unexecuted_blocks=1 00:27:20.225 00:27:20.225 ' 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:20.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.225 --rc genhtml_branch_coverage=1 00:27:20.225 --rc genhtml_function_coverage=1 00:27:20.225 --rc genhtml_legend=1 00:27:20.225 --rc geninfo_all_blocks=1 00:27:20.225 --rc geninfo_unexecuted_blocks=1 00:27:20.225 00:27:20.225 ' 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:20.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:20.225 --rc genhtml_branch_coverage=1 00:27:20.225 --rc genhtml_function_coverage=1 00:27:20.225 --rc genhtml_legend=1 00:27:20.225 --rc geninfo_all_blocks=1 00:27:20.225 --rc geninfo_unexecuted_blocks=1 00:27:20.225 00:27:20.225 ' 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@5 -- # export PATH 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:20.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest 
-- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:20.225 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:20.226 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.226 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:20.226 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:20.226 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:20.226 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.226 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.226 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.226 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:20.226 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:20.226 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:20.226 15:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # 
pci_net_devs=() 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:26.799 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:26.799 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:26.799 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:26.800 Found net devices under 0000:86:00.0: cvl_0_0 00:27:26.800 
15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:26.800 Found net devices under 0000:86:00.1: cvl_0_1 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.800 15:44:31 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:26.800 15:44:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:26.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:27:26.800 00:27:26.800 --- 10.0.0.2 ping statistics --- 00:27:26.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.800 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:26.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:27:26.800 00:27:26.800 --- 10.0.0.1 ping statistics --- 00:27:26.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.800 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:26.800 ************************************ 00:27:26.800 START TEST nvmf_digest_clean 00:27:26.800 ************************************ 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:26.800 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@10 -- # set +x 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3155534 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3155534 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3155534 ']' 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:26.801 [2024-12-06 15:44:32.170496] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:27:26.801 [2024-12-06 15:44:32.170540] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.801 [2024-12-06 15:44:32.249324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.801 [2024-12-06 15:44:32.289276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.801 [2024-12-06 15:44:32.289312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.801 [2024-12-06 15:44:32.289319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.801 [2024-12-06 15:44:32.289328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.801 [2024-12-06 15:44:32.289332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:26.801 [2024-12-06 15:44:32.289854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:26.801 null0 00:27:26.801 [2024-12-06 15:44:32.437953] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.801 [2024-12-06 15:44:32.462147] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3155563 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3155563 /var/tmp/bperf.sock 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3155563 ']' 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:26.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:26.801 [2024-12-06 15:44:32.512878] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:27:26.801 [2024-12-06 15:44:32.512924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155563 ] 00:27:26.801 [2024-12-06 15:44:32.586218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.801 [2024-12-06 15:44:32.628731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:26.801 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:27.060 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:27.060 15:44:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:27.318 nvme0n1 00:27:27.318 15:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:27.318 15:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:27.318 Running I/O for 2 seconds... 00:27:29.687 25463.00 IOPS, 99.46 MiB/s [2024-12-06T14:44:35.685Z] 25679.50 IOPS, 100.31 MiB/s 00:27:29.687 Latency(us) 00:27:29.687 [2024-12-06T14:44:35.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.687 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:29.687 nvme0n1 : 2.00 25700.41 100.39 0.00 0.00 4975.50 2512.21 13044.78 00:27:29.687 [2024-12-06T14:44:35.685Z] =================================================================================================================== 00:27:29.687 [2024-12-06T14:44:35.685Z] Total : 25700.41 100.39 0.00 0.00 4975.50 2512.21 13044.78 00:27:29.687 { 00:27:29.687 "results": [ 00:27:29.687 { 00:27:29.687 "job": "nvme0n1", 00:27:29.687 "core_mask": "0x2", 00:27:29.687 "workload": "randread", 00:27:29.687 "status": "finished", 00:27:29.687 "queue_depth": 128, 00:27:29.687 "io_size": 4096, 00:27:29.687 "runtime": 2.003353, 00:27:29.687 "iops": 25700.413257174347, 00:27:29.687 "mibps": 100.39223928583729, 00:27:29.687 "io_failed": 0, 00:27:29.687 "io_timeout": 0, 00:27:29.687 "avg_latency_us": 4975.49819085169, 00:27:29.687 "min_latency_us": 2512.213333333333, 00:27:29.687 "max_latency_us": 13044.784761904762 00:27:29.687 } 00:27:29.687 ], 00:27:29.687 "core_count": 1 00:27:29.687 } 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:29.687 | select(.opcode=="crc32c") 00:27:29.687 | "\(.module_name) \(.executed)"' 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3155563 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3155563 ']' 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3155563 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3155563 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3155563' 00:27:29.687 killing process with pid 3155563 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3155563 00:27:29.687 Received shutdown signal, test time was about 2.000000 seconds 00:27:29.687 00:27:29.687 Latency(us) 00:27:29.687 [2024-12-06T14:44:35.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.687 [2024-12-06T14:44:35.685Z] =================================================================================================================== 00:27:29.687 [2024-12-06T14:44:35.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:29.687 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3155563 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3156034 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 3156034 /var/tmp/bperf.sock 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3156034 ']' 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:29.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:29.958 [2024-12-06 15:44:35.741635] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:27:29.958 [2024-12-06 15:44:35.741686] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3156034 ] 00:27:29.958 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:29.958 Zero copy mechanism will not be used. 
00:27:29.958 [2024-12-06 15:44:35.816970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.958 [2024-12-06 15:44:35.853822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:29.958 15:44:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:30.217 15:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:30.217 15:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:30.474 nvme0n1 00:27:30.474 15:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:30.474 15:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:30.733 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:30.733 Zero copy mechanism will not be used. 00:27:30.733 Running I/O for 2 seconds... 
00:27:32.606 5638.00 IOPS, 704.75 MiB/s [2024-12-06T14:44:38.604Z] 5762.00 IOPS, 720.25 MiB/s 00:27:32.606 Latency(us) 00:27:32.606 [2024-12-06T14:44:38.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.606 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:32.606 nvme0n1 : 2.00 5761.50 720.19 0.00 0.00 2774.25 624.15 6428.77 00:27:32.606 [2024-12-06T14:44:38.604Z] =================================================================================================================== 00:27:32.606 [2024-12-06T14:44:38.604Z] Total : 5761.50 720.19 0.00 0.00 2774.25 624.15 6428.77 00:27:32.606 { 00:27:32.606 "results": [ 00:27:32.607 { 00:27:32.607 "job": "nvme0n1", 00:27:32.607 "core_mask": "0x2", 00:27:32.607 "workload": "randread", 00:27:32.607 "status": "finished", 00:27:32.607 "queue_depth": 16, 00:27:32.607 "io_size": 131072, 00:27:32.607 "runtime": 2.002952, 00:27:32.607 "iops": 5761.496031856979, 00:27:32.607 "mibps": 720.1870039821224, 00:27:32.607 "io_failed": 0, 00:27:32.607 "io_timeout": 0, 00:27:32.607 "avg_latency_us": 2774.2452739126848, 00:27:32.607 "min_latency_us": 624.152380952381, 00:27:32.607 "max_latency_us": 6428.769523809524 00:27:32.607 } 00:27:32.607 ], 00:27:32.607 "core_count": 1 00:27:32.607 } 00:27:32.607 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:32.607 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:32.607 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:32.607 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:32.607 | select(.opcode=="crc32c") 00:27:32.607 | "\(.module_name) \(.executed)"' 00:27:32.607 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3156034 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3156034 ']' 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3156034 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3156034 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3156034' 00:27:32.866 killing process with pid 3156034 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3156034 00:27:32.866 Received shutdown signal, test time was about 2.000000 seconds 
00:27:32.866 00:27:32.866 Latency(us) 00:27:32.866 [2024-12-06T14:44:38.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.866 [2024-12-06T14:44:38.864Z] =================================================================================================================== 00:27:32.866 [2024-12-06T14:44:38.864Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:32.866 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3156034 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3156634 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3156634 /var/tmp/bperf.sock 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3156634 ']' 00:27:33.125 15:44:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:33.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.125 15:44:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:33.125 [2024-12-06 15:44:39.013788] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:27:33.125 [2024-12-06 15:44:39.013837] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3156634 ] 00:27:33.125 [2024-12-06 15:44:39.087577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.385 [2024-12-06 15:44:39.129958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.385 15:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.385 15:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:33.385 15:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:33.385 15:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:33.385 15:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:33.644 15:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:33.644 15:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:33.903 nvme0n1 00:27:33.903 15:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:33.903 15:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:33.903 Running I/O for 2 seconds... 
00:27:36.217 27984.00 IOPS, 109.31 MiB/s [2024-12-06T14:44:42.215Z] 28340.00 IOPS, 110.70 MiB/s 00:27:36.217 Latency(us) 00:27:36.217 [2024-12-06T14:44:42.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.217 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:36.217 nvme0n1 : 2.00 28366.53 110.81 0.00 0.00 4506.49 1833.45 10111.27 00:27:36.217 [2024-12-06T14:44:42.215Z] =================================================================================================================== 00:27:36.217 [2024-12-06T14:44:42.215Z] Total : 28366.53 110.81 0.00 0.00 4506.49 1833.45 10111.27 00:27:36.217 { 00:27:36.217 "results": [ 00:27:36.217 { 00:27:36.217 "job": "nvme0n1", 00:27:36.217 "core_mask": "0x2", 00:27:36.217 "workload": "randwrite", 00:27:36.217 "status": "finished", 00:27:36.217 "queue_depth": 128, 00:27:36.217 "io_size": 4096, 00:27:36.217 "runtime": 2.004898, 00:27:36.217 "iops": 28366.530367130898, 00:27:36.217 "mibps": 110.80675924660507, 00:27:36.217 "io_failed": 0, 00:27:36.217 "io_timeout": 0, 00:27:36.218 "avg_latency_us": 4506.488881029413, 00:27:36.218 "min_latency_us": 1833.4476190476191, 00:27:36.218 "max_latency_us": 10111.26857142857 00:27:36.218 } 00:27:36.218 ], 00:27:36.218 "core_count": 1 00:27:36.218 } 00:27:36.218 15:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:36.218 15:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:36.218 15:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:36.218 15:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:36.218 | select(.opcode=="crc32c") 00:27:36.218 | "\(.module_name) \(.executed)"' 00:27:36.218 15:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3156634 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3156634 ']' 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3156634 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3156634 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3156634' 00:27:36.218 killing process with pid 3156634 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3156634 00:27:36.218 Received shutdown signal, test time was about 2.000000 seconds 
00:27:36.218 00:27:36.218 Latency(us) 00:27:36.218 [2024-12-06T14:44:42.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.218 [2024-12-06T14:44:42.216Z] =================================================================================================================== 00:27:36.218 [2024-12-06T14:44:42.216Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:36.218 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3156634 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3157191 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3157191 /var/tmp/bperf.sock 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3157191 ']' 00:27:36.476 15:44:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:36.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:36.476 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:36.476 [2024-12-06 15:44:42.363621] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:27:36.476 [2024-12-06 15:44:42.363667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3157191 ] 00:27:36.477 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:36.477 Zero copy mechanism will not be used. 
00:27:36.477 [2024-12-06 15:44:42.437376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.735 [2024-12-06 15:44:42.474079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.735 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:36.735 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:36.735 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:36.735 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:36.735 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:36.993 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:36.993 15:44:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:37.252 nvme0n1 00:27:37.252 15:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:37.252 15:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:37.252 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:37.252 Zero copy mechanism will not be used. 00:27:37.252 Running I/O for 2 seconds... 
00:27:39.569 6138.00 IOPS, 767.25 MiB/s [2024-12-06T14:44:45.567Z] 6330.00 IOPS, 791.25 MiB/s 00:27:39.569 Latency(us) 00:27:39.569 [2024-12-06T14:44:45.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.569 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:39.569 nvme0n1 : 2.00 6327.99 791.00 0.00 0.00 2524.15 1849.05 8862.96 00:27:39.569 [2024-12-06T14:44:45.567Z] =================================================================================================================== 00:27:39.569 [2024-12-06T14:44:45.567Z] Total : 6327.99 791.00 0.00 0.00 2524.15 1849.05 8862.96 00:27:39.569 { 00:27:39.569 "results": [ 00:27:39.569 { 00:27:39.569 "job": "nvme0n1", 00:27:39.569 "core_mask": "0x2", 00:27:39.569 "workload": "randwrite", 00:27:39.569 "status": "finished", 00:27:39.569 "queue_depth": 16, 00:27:39.569 "io_size": 131072, 00:27:39.569 "runtime": 2.003005, 00:27:39.569 "iops": 6327.9921917319225, 00:27:39.569 "mibps": 790.9990239664903, 00:27:39.569 "io_failed": 0, 00:27:39.569 "io_timeout": 0, 00:27:39.569 "avg_latency_us": 2524.1488626655396, 00:27:39.569 "min_latency_us": 1849.0514285714285, 00:27:39.569 "max_latency_us": 8862.96380952381 00:27:39.569 } 00:27:39.569 ], 00:27:39.569 "core_count": 1 00:27:39.569 } 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:39.569 | select(.opcode=="crc32c") 00:27:39.569 | "\(.module_name) \(.executed)"' 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3157191 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3157191 ']' 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3157191 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3157191 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3157191' 00:27:39.569 killing process with pid 3157191 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3157191 00:27:39.569 Received shutdown signal, test time was about 2.000000 seconds 
00:27:39.569 00:27:39.569 Latency(us) 00:27:39.569 [2024-12-06T14:44:45.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.569 [2024-12-06T14:44:45.567Z] =================================================================================================================== 00:27:39.569 [2024-12-06T14:44:45.567Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:39.569 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3157191 00:27:39.828 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3155534 00:27:39.828 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3155534 ']' 00:27:39.828 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3155534 00:27:39.828 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:39.828 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:39.828 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3155534 00:27:39.828 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:39.828 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:39.828 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3155534' 00:27:39.828 killing process with pid 3155534 00:27:39.828 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3155534 00:27:39.828 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3155534 00:27:40.086 00:27:40.086 
real 0m13.799s 00:27:40.086 user 0m26.365s 00:27:40.086 sys 0m4.551s 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:40.086 ************************************ 00:27:40.086 END TEST nvmf_digest_clean 00:27:40.086 ************************************ 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:40.086 ************************************ 00:27:40.086 START TEST nvmf_digest_error 00:27:40.086 ************************************ 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3157752 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3157752 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3157752 ']' 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.086 15:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.086 [2024-12-06 15:44:46.041624] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:27:40.086 [2024-12-06 15:44:46.041667] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.345 [2024-12-06 15:44:46.120833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.345 [2024-12-06 15:44:46.160064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.345 [2024-12-06 15:44:46.160098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:40.345 [2024-12-06 15:44:46.160107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.345 [2024-12-06 15:44:46.160113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.345 [2024-12-06 15:44:46.160119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:40.345 [2024-12-06 15:44:46.160693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.345 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.345 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:40.345 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:40.345 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:40.345 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.345 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.345 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:40.345 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.345 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.345 [2024-12-06 15:44:46.241161] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:40.345 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.345 15:44:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:40.345 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:40.345 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.345 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.345 null0 00:27:40.345 [2024-12-06 15:44:46.338209] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.603 [2024-12-06 15:44:46.362415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3157927 00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3157927 /var/tmp/bperf.sock 00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3157927 ']' 
00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:40.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.603 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.603 [2024-12-06 15:44:46.415088] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:27:40.603 [2024-12-06 15:44:46.415132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3157927 ] 00:27:40.603 [2024-12-06 15:44:46.489887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.603 [2024-12-06 15:44:46.529888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.877 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.877 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:40.877 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:40.877 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:40.877 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:40.877 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.877 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.877 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.877 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:40.877 15:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:41.136 nvme0n1 00:27:41.136 15:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:41.136 15:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.136 15:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.395 15:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.395 15:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:41.395 15:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:41.395 Running I/O for 2 seconds... 00:27:41.395 [2024-12-06 15:44:47.237732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.237764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.237775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.246890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.246915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.246924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.257675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.257698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.257707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.266305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.266326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23406 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.266338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.277021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.277042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.277050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.286110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.286131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.286139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.295848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.295872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.295881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.305203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.305224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.305233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.314373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.314393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.314402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.323537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.323560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.323567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.332583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.332604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.332613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.341872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.341895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.341903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.352314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.352341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.352350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.361445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.361466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.361474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.370647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.370668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.370676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.379119] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.379140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.379148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.395 [2024-12-06 15:44:47.389092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.395 [2024-12-06 15:44:47.389114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.395 [2024-12-06 15:44:47.389122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.654 [2024-12-06 15:44:47.399528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.654 [2024-12-06 15:44:47.399551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.654 [2024-12-06 15:44:47.399559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.654 [2024-12-06 15:44:47.408931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.654 [2024-12-06 15:44:47.408952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.654 [2024-12-06 15:44:47.408960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:41.654 [2024-12-06 15:44:47.418233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.654 [2024-12-06 15:44:47.418254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.654 [2024-12-06 15:44:47.418262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.654 [2024-12-06 15:44:47.427226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.654 [2024-12-06 15:44:47.427247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.654 [2024-12-06 15:44:47.427255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.436437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.436459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.436467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.445181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.445203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.445211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.456827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.456848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.456856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.468511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.468533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.468541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.476664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.476686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.476694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.487856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.487877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.487885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.498233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.498255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.498263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.506756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.506779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.506787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.517585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.517605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.517617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.525816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.525836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:41.655 [2024-12-06 15:44:47.525844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.537216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.537237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.537246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.548478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.548499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.548508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.556796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.556818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.556826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.567491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.567512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 
nsid:1 lba:19698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.567520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.576107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.576141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.576149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.585484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.585505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.585513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.595685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.595707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.595716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.604001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.604028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.604037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.614517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.614538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.614547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.625544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.625567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.625576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.633767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.633789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.633797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.655 [2024-12-06 15:44:47.646088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xdda2e0) 00:27:41.655 [2024-12-06 15:44:47.646110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.655 [2024-12-06 15:44:47.646119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.915 [2024-12-06 15:44:47.658073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.915 [2024-12-06 15:44:47.658095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.915 [2024-12-06 15:44:47.658103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.915 [2024-12-06 15:44:47.666970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.915 [2024-12-06 15:44:47.666991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.915 [2024-12-06 15:44:47.667000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.915 [2024-12-06 15:44:47.676565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.915 [2024-12-06 15:44:47.676585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.915 [2024-12-06 15:44:47.676593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.915 [2024-12-06 15:44:47.687114] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.915 [2024-12-06 15:44:47.687134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.915 [2024-12-06 15:44:47.687142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.915 [2024-12-06 15:44:47.699031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.915 [2024-12-06 15:44:47.699051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.915 [2024-12-06 15:44:47.699060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.915 [2024-12-06 15:44:47.707449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.915 [2024-12-06 15:44:47.707470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.915 [2024-12-06 15:44:47.707478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.915 [2024-12-06 15:44:47.717005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.915 [2024-12-06 15:44:47.717025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.915 [2024-12-06 15:44:47.717033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:41.915 [2024-12-06 15:44:47.726330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.915 [2024-12-06 15:44:47.726351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.915 [2024-12-06 15:44:47.726359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.915 [2024-12-06 15:44:47.735659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.915 [2024-12-06 15:44:47.735679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.735687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.744802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.744823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.744831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.754454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.754473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.754481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.764925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.764946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.764954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.772934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.772954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.772967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.785915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.785936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.785944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.794238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.794259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.794267] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.804826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.804846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.804855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.815144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.815165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.815173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.823441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.823461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.823469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.834752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.834773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1897 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.834781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.845410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.845431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.845439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.853577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.853597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.853605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.864186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.864207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.864215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.876692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.876713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:84 nsid:1 lba:10266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.876722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.887855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.887875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.887884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.896475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.896496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.896504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.916 [2024-12-06 15:44:47.909207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:41.916 [2024-12-06 15:44:47.909228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.916 [2024-12-06 15:44:47.909236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:47.919041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:47.919062] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.176 [2024-12-06 15:44:47.919070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:47.927590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:47.927611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.176 [2024-12-06 15:44:47.927619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:47.936868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:47.936887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.176 [2024-12-06 15:44:47.936895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:47.946484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:47.946504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.176 [2024-12-06 15:44:47.946516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:47.955187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:47.955208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.176 [2024-12-06 15:44:47.955217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:47.967036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:47.967057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.176 [2024-12-06 15:44:47.967066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:47.977793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:47.977813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.176 [2024-12-06 15:44:47.977821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:47.986747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:47.986767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.176 [2024-12-06 15:44:47.986775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:47.997561] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:47.997582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.176 [2024-12-06 15:44:47.997590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:48.007893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:48.007914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.176 [2024-12-06 15:44:48.007922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:48.017115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:48.017137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.176 [2024-12-06 15:44:48.017146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:48.026293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:48.026314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.176 [2024-12-06 15:44:48.026322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:48.036967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:48.036991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.176 [2024-12-06 15:44:48.036999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:48.045145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:48.045165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.176 [2024-12-06 15:44:48.045174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.176 [2024-12-06 15:44:48.056767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.176 [2024-12-06 15:44:48.056788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.177 [2024-12-06 15:44:48.056797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.177 [2024-12-06 15:44:48.064977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.177 [2024-12-06 15:44:48.064998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.177 [2024-12-06 15:44:48.065006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.177 [2024-12-06 15:44:48.076726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.177 [2024-12-06 15:44:48.076747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.177 [2024-12-06 15:44:48.076755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.177 [2024-12-06 15:44:48.089066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.177 [2024-12-06 15:44:48.089087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.177 [2024-12-06 15:44:48.089096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.177 [2024-12-06 15:44:48.097341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.177 [2024-12-06 15:44:48.097362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.177 [2024-12-06 15:44:48.097377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.177 [2024-12-06 15:44:48.108128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.177 [2024-12-06 15:44:48.108148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.177 [2024-12-06 15:44:48.108156] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.177 [2024-12-06 15:44:48.119199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.177 [2024-12-06 15:44:48.119220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.177 [2024-12-06 15:44:48.119229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.177 [2024-12-06 15:44:48.130077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.177 [2024-12-06 15:44:48.130098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.177 [2024-12-06 15:44:48.130106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.177 [2024-12-06 15:44:48.138568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.177 [2024-12-06 15:44:48.138589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.177 [2024-12-06 15:44:48.138597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.177 [2024-12-06 15:44:48.151123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.177 [2024-12-06 15:44:48.151152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15216 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:42.177 [2024-12-06 15:44:48.151160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.177 [2024-12-06 15:44:48.161462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.177 [2024-12-06 15:44:48.161483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.177 [2024-12-06 15:44:48.161491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.177 [2024-12-06 15:44:48.171001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.177 [2024-12-06 15:44:48.171022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.177 [2024-12-06 15:44:48.171031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.436 [2024-12-06 15:44:48.179585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.179605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.179613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.191344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.191365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:59 nsid:1 lba:6481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.191378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.199225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.199245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.199253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.210443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.210464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.210475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 25406.00 IOPS, 99.24 MiB/s [2024-12-06T14:44:48.435Z] [2024-12-06 15:44:48.222532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.222553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.222562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.233665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 
00:27:42.437 [2024-12-06 15:44:48.233687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.233695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.243378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.243399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.243407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.253295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.253315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.253323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.262470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.262490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.262498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.274037] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.274058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.274066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.286101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.286122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.286130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.297215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.297236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.297245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.306109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.306133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.306141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.318453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.318476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.318484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.330580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.330600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.330608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.341035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.341056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.341063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.352725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.352746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.352754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.363187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.363207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.363215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.372440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.372461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.372469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.382144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.382164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.382173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.391990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.392011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.392019] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.403865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.403886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.403894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.415493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.415513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.415521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.437 [2024-12-06 15:44:48.424500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.437 [2024-12-06 15:44:48.424522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.437 [2024-12-06 15:44:48.424530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.436231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.436253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4189 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.436261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.448923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.448944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.448953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.461643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.461664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.461672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.472670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.472689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.472697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.481135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.481154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:55 nsid:1 lba:22924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.481162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.492455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.492480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.492488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.505042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.505063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.505072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.513711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.513732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.513741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.525286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.525307] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.525315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.536611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.536631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.536639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.546819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.546839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.546848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.554950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.554970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.554978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.565034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.565054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.565062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.574786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.574807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.574815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.584960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.697 [2024-12-06 15:44:48.584981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.697 [2024-12-06 15:44:48.584989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.697 [2024-12-06 15:44:48.593443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.698 [2024-12-06 15:44:48.593463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.698 [2024-12-06 15:44:48.593471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.698 [2024-12-06 15:44:48.605072] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.698 [2024-12-06 15:44:48.605093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.698 [2024-12-06 15:44:48.605101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.698 [2024-12-06 15:44:48.616056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.698 [2024-12-06 15:44:48.616078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.698 [2024-12-06 15:44:48.616087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.698 [2024-12-06 15:44:48.628433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.698 [2024-12-06 15:44:48.628455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.698 [2024-12-06 15:44:48.628464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.698 [2024-12-06 15:44:48.638143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.698 [2024-12-06 15:44:48.638163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.698 [2024-12-06 15:44:48.638171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:42.698 [2024-12-06 15:44:48.645864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.698 [2024-12-06 15:44:48.645886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.698 [2024-12-06 15:44:48.645895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.698 [2024-12-06 15:44:48.655362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.698 [2024-12-06 15:44:48.655389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.698 [2024-12-06 15:44:48.655400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.698 [2024-12-06 15:44:48.664978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.698 [2024-12-06 15:44:48.665000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.698 [2024-12-06 15:44:48.665013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.698 [2024-12-06 15:44:48.674177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.698 [2024-12-06 15:44:48.674197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.698 [2024-12-06 15:44:48.674205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.698 [2024-12-06 15:44:48.683643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.698 [2024-12-06 15:44:48.683663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.698 [2024-12-06 15:44:48.683671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.957 [2024-12-06 15:44:48.692722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.957 [2024-12-06 15:44:48.692743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.957 [2024-12-06 15:44:48.692751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.957 [2024-12-06 15:44:48.701653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.957 [2024-12-06 15:44:48.701673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.957 [2024-12-06 15:44:48.701682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.957 [2024-12-06 15:44:48.710704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.957 [2024-12-06 15:44:48.710724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.957 [2024-12-06 15:44:48.710732] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.957 [2024-12-06 15:44:48.720407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.957 [2024-12-06 15:44:48.720430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.957 [2024-12-06 15:44:48.720439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.957 [2024-12-06 15:44:48.730050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.957 [2024-12-06 15:44:48.730071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.957 [2024-12-06 15:44:48.730079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.957 [2024-12-06 15:44:48.738236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.957 [2024-12-06 15:44:48.738257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.957 [2024-12-06 15:44:48.738265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.957 [2024-12-06 15:44:48.748873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.957 [2024-12-06 15:44:48.748900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2186 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:42.957 [2024-12-06 15:44:48.748908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.957 [2024-12-06 15:44:48.758327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.758348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.758357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.767725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.767747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.767755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.777065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.777088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.777097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.786287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.786308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:15577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.786316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.795508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.795528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.795536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.806989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.807010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.807019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.816182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.816203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.816211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.824846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.824867] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.824875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.837423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.837443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.837452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.849792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.849813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.849821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.862284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.862305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.862313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.870254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.870275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.870283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.880358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.880385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.880393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.892569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.892590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.892599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.904665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.904686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.904695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.912847] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.912869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.912877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.924635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.924656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.924669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.936581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.936603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.936611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.958 [2024-12-06 15:44:48.946520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:42.958 [2024-12-06 15:44:48.946541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.958 [2024-12-06 15:44:48.946550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:48.957711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:48.957734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:48.957742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:48.970281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:48.970304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:48.970312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:48.982819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:48.982841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:48.982849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:48.994461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:48.994482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:48.994491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.004324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.004345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.004354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.014091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.014113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.014121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.023472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.023498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.023506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.032864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.032885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.032894] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.041546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.041568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.041577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.052026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.052047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.052056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.062340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.062361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.062375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.070751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.070772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20840 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.070780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.081280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.081301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.081310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.089992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.090013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.090021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.099375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.099397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.099405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.108801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.108822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:17 nsid:1 lba:14370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.108830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.117732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.117753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.117761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.129170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.129190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.129198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.137643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.137664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.137673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.149205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.149226] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.149234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.158778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.158799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.158807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.167875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.167896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.167904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.179338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.179359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.179372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.188748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.188771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.188779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.197082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.218 [2024-12-06 15:44:49.197103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.218 [2024-12-06 15:44:49.197111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.218 [2024-12-06 15:44:49.209920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.219 [2024-12-06 15:44:49.209942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.219 [2024-12-06 15:44:49.209950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.478 [2024-12-06 15:44:49.221462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdda2e0) 00:27:43.478 [2024-12-06 15:44:49.221483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.478 [2024-12-06 15:44:49.221491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:43.478 25111.00 IOPS, 98.09 MiB/s 00:27:43.478 Latency(us) 
00:27:43.478 [2024-12-06T14:44:49.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.478 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:43.478 nvme0n1 : 2.00 25136.27 98.19 0.00 0.00 5086.98 2418.59 18350.08 00:27:43.478 [2024-12-06T14:44:49.476Z] =================================================================================================================== 00:27:43.478 [2024-12-06T14:44:49.476Z] Total : 25136.27 98.19 0.00 0.00 5086.98 2418.59 18350.08 00:27:43.478 { 00:27:43.478 "results": [ 00:27:43.478 { 00:27:43.478 "job": "nvme0n1", 00:27:43.478 "core_mask": "0x2", 00:27:43.478 "workload": "randread", 00:27:43.478 "status": "finished", 00:27:43.478 "queue_depth": 128, 00:27:43.478 "io_size": 4096, 00:27:43.478 "runtime": 2.003917, 00:27:43.478 "iops": 25136.270614002475, 00:27:43.478 "mibps": 98.18855708594717, 00:27:43.478 "io_failed": 0, 00:27:43.478 "io_timeout": 0, 00:27:43.478 "avg_latency_us": 5086.9839378100205, 00:27:43.478 "min_latency_us": 2418.5904761904762, 00:27:43.478 "max_latency_us": 18350.08 00:27:43.478 } 00:27:43.478 ], 00:27:43.478 "core_count": 1 00:27:43.478 } 00:27:43.478 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:43.478 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:43.478 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:43.478 | .driver_specific 00:27:43.478 | .nvme_error 00:27:43.478 | .status_code 00:27:43.478 | .command_transient_transport_error' 00:27:43.478 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:43.478 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- 
# (( 197 > 0 )) 00:27:43.478 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3157927 00:27:43.478 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3157927 ']' 00:27:43.478 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3157927 00:27:43.478 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:43.478 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:43.478 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3157927 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3157927' 00:27:43.737 killing process with pid 3157927 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3157927 00:27:43.737 Received shutdown signal, test time was about 2.000000 seconds 00:27:43.737 00:27:43.737 Latency(us) 00:27:43.737 [2024-12-06T14:44:49.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.737 [2024-12-06T14:44:49.735Z] =================================================================================================================== 00:27:43.737 [2024-12-06T14:44:49.735Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3157927 00:27:43.737 15:44:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3158402 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3158402 /var/tmp/bperf.sock 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3158402 ']' 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.737 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:43.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:43.738 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.738 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:43.738 [2024-12-06 15:44:49.685876] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:27:43.738 [2024-12-06 15:44:49.685925] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3158402 ] 00:27:43.738 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:43.738 Zero copy mechanism will not be used. 00:27:43.997 [2024-12-06 15:44:49.759287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.997 [2024-12-06 15:44:49.801609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.997 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:43.997 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:43.997 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:43.997 15:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:44.256 15:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:44.256 15:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.256 15:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:27:44.256 15:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.256 15:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:44.256 15:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:44.515 nvme0n1 00:27:44.515 15:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:44.515 15:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.775 15:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:44.775 15:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.775 15:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:44.775 15:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:44.775 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:44.775 Zero copy mechanism will not be used. 00:27:44.775 Running I/O for 2 seconds... 
00:27:44.775 [2024-12-06 15:44:50.615860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.615894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.615905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.621226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.621252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.621262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.626632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.626656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.626665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.630203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.630224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.630234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.634341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.634364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.634379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.639691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.639714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.639723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.644817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.644838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.644847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.650123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.650146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.650155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.655488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.655511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.655519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.660767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.660789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.660797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.666047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.666069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.666077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.671541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.671563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:44.775 [2024-12-06 15:44:50.671572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.677051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.677073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.677085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.682545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.682568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.682576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.688233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.688257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.688265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.693897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.693920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.693928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.699218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.699240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.699249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.775 [2024-12-06 15:44:50.704521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.775 [2024-12-06 15:44:50.704545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.775 [2024-12-06 15:44:50.704554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.776 [2024-12-06 15:44:50.709963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.776 [2024-12-06 15:44:50.709989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.776 [2024-12-06 15:44:50.709998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.776 [2024-12-06 15:44:50.715345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.776 [2024-12-06 15:44:50.715374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.776 [2024-12-06 15:44:50.715383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.776 [2024-12-06 15:44:50.720880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.776 [2024-12-06 15:44:50.720903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.776 [2024-12-06 15:44:50.720911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.776 [2024-12-06 15:44:50.726164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.776 [2024-12-06 15:44:50.726190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.776 [2024-12-06 15:44:50.726199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.776 [2024-12-06 15:44:50.731501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.776 [2024-12-06 15:44:50.731522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.776 [2024-12-06 15:44:50.731530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.776 [2024-12-06 15:44:50.736991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 
00:27:44.776 [2024-12-06 15:44:50.737013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.776 [2024-12-06 15:44:50.737021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.776 [2024-12-06 15:44:50.742313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.776 [2024-12-06 15:44:50.742334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.776 [2024-12-06 15:44:50.742342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.776 [2024-12-06 15:44:50.747842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.776 [2024-12-06 15:44:50.747865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.776 [2024-12-06 15:44:50.747876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.776 [2024-12-06 15:44:50.753327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.776 [2024-12-06 15:44:50.753349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.776 [2024-12-06 15:44:50.753358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.776 [2024-12-06 15:44:50.758636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.776 [2024-12-06 15:44:50.758659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.776 [2024-12-06 15:44:50.758667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.776 [2024-12-06 15:44:50.764284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.776 [2024-12-06 15:44:50.764306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.776 [2024-12-06 15:44:50.764314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.776 [2024-12-06 15:44:50.769655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:44.776 [2024-12-06 15:44:50.769677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.776 [2024-12-06 15:44:50.769685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.036 [2024-12-06 15:44:50.775091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.036 [2024-12-06 15:44:50.775114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.036 [2024-12-06 15:44:50.775123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.036 [2024-12-06 15:44:50.780387] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.036 [2024-12-06 15:44:50.780409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.036 [2024-12-06 15:44:50.780417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.036 [2024-12-06 15:44:50.785678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.036 [2024-12-06 15:44:50.785700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.036 [2024-12-06 15:44:50.785708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.036 [2024-12-06 15:44:50.791053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.036 [2024-12-06 15:44:50.791075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.036 [2024-12-06 15:44:50.791084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.036 [2024-12-06 15:44:50.796363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.036 [2024-12-06 15:44:50.796391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.036 [2024-12-06 15:44:50.796399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:27:45.036 [2024-12-06 15:44:50.801686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.036 [2024-12-06 15:44:50.801707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.036 [2024-12-06 15:44:50.801716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.036 [2024-12-06 15:44:50.806691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.037 [2024-12-06 15:44:50.806713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.037 [2024-12-06 15:44:50.806722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.037 [2024-12-06 15:44:50.811686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.037 [2024-12-06 15:44:50.811708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.037 [2024-12-06 15:44:50.811718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.037 [2024-12-06 15:44:50.816695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.037 [2024-12-06 15:44:50.816717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.037 [2024-12-06 15:44:50.816731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.037 [2024-12-06 15:44:50.821618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.037 [2024-12-06 15:44:50.821639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.037 [2024-12-06 15:44:50.821647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... same three-record pattern repeats every ~5 ms from 15:44:50.826 through 15:44:51.238: a data digest error on tqpair=(0x655dd0) in nvme_tcp.c:1365, followed by a READ command print (sqid:1, cid 3-8, varying lba, len:32) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with cycling sqhd values ...]
00:27:45.300 [2024-12-06 15:44:51.243442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.300 [2024-12-06 15:44:51.243464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.300 [2024-12-06 15:44:51.243472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.300 [2024-12-06 15:44:51.248614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.300 [2024-12-06 15:44:51.248636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.300 [2024-12-06 15:44:51.248644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.300 [2024-12-06 15:44:51.253832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.300 [2024-12-06 15:44:51.253854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.300 [2024-12-06 15:44:51.253863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.300 [2024-12-06 15:44:51.258956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.300 [2024-12-06 15:44:51.258978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.300 [2024-12-06 15:44:51.258986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.300 [2024-12-06 15:44:51.264000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.300 [2024-12-06 15:44:51.264021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.300 [2024-12-06 15:44:51.264030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.300 [2024-12-06 15:44:51.269093] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.300 [2024-12-06 15:44:51.269115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.300 [2024-12-06 15:44:51.269123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.300 [2024-12-06 15:44:51.274151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.300 [2024-12-06 15:44:51.274172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.300 [2024-12-06 15:44:51.274180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.300 [2024-12-06 15:44:51.279364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.300 [2024-12-06 15:44:51.279393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.300 [2024-12-06 15:44:51.279401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.300 [2024-12-06 15:44:51.284554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.300 [2024-12-06 15:44:51.284575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.300 [2024-12-06 15:44:51.284583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:27:45.300 [2024-12-06 15:44:51.289775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.300 [2024-12-06 15:44:51.289796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.300 [2024-12-06 15:44:51.289804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.560 [2024-12-06 15:44:51.295011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.560 [2024-12-06 15:44:51.295033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.560 [2024-12-06 15:44:51.295043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.560 [2024-12-06 15:44:51.300249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.560 [2024-12-06 15:44:51.300283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.300292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.305491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.305512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.305522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.310668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.310689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.310698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.315837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.315858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.315870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.320968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.320989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.320998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.326146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.326167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.326177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.331299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.331322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.331330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.336477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.336499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.336508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.341620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.341654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.341664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.346838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.346861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:45.561 [2024-12-06 15:44:51.346869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.351981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.352004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.352014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.357147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.357169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.357177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.362360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.362390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.362399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.367591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.367613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.367621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.371925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.371948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.371956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.377060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.377082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.377091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.382201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.382223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.382233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.387229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.387250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.387258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.392184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.392206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.392214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.397218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.397240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.397248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.402331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.402352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.402360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.407597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 
00:27:45.561 [2024-12-06 15:44:51.407619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.407627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.412824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.412846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.412854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.418013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.418035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.418043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.423227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.423249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.423257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.428376] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.428398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.428406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.433546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.561 [2024-12-06 15:44:51.433568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.561 [2024-12-06 15:44:51.433576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.561 [2024-12-06 15:44:51.438715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.562 [2024-12-06 15:44:51.438737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.562 [2024-12-06 15:44:51.438745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.562 [2024-12-06 15:44:51.443950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.562 [2024-12-06 15:44:51.443972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.562 [2024-12-06 15:44:51.443980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:27:45.562 [2024-12-06 15:44:51.449173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.562 [2024-12-06 15:44:51.449195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.562 [2024-12-06 15:44:51.449208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.562 [2024-12-06 15:44:51.454382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.562 [2024-12-06 15:44:51.454404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.562 [2024-12-06 15:44:51.454412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.562 [2024-12-06 15:44:51.459555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.562 [2024-12-06 15:44:51.459577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.562 [2024-12-06 15:44:51.459586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.562 [2024-12-06 15:44:51.464734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.562 [2024-12-06 15:44:51.464756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.562 [2024-12-06 15:44:51.464764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.562 [2024-12-06 15:44:51.469946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.562 [2024-12-06 15:44:51.469968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.562 [2024-12-06 15:44:51.469976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.562 [2024-12-06 15:44:51.475103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.562 [2024-12-06 15:44:51.475124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.562 [2024-12-06 15:44:51.475133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.562 [2024-12-06 15:44:51.480296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.562 [2024-12-06 15:44:51.480319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.562 [2024-12-06 15:44:51.480327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.562 [2024-12-06 15:44:51.485510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.562 [2024-12-06 15:44:51.485532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.562 [2024-12-06 15:44:51.485541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.562 [2024-12-06 15:44:51.490730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.562 [2024-12-06 15:44:51.490752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.562 [2024-12-06 15:44:51.490760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.562 [2024-12-06 15:44:51.495947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.562 [2024-12-06 15:44:51.495974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.562 [2024-12-06 15:44:51.495983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.562 [2024-12-06 15:44:51.501142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.562 [2024-12-06 15:44:51.501164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.562 [2024-12-06 15:44:51.501172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.562 [2024-12-06 15:44:51.506313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:45.562 [2024-12-06 15:44:51.506335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0
00:27:45.562 [2024-12-06 15:44:51.506344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.562 [2024-12-06 15:44:51.511495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.562 [2024-12-06 15:44:51.511515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.562 [2024-12-06 15:44:51.511524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.562 [2024-12-06 15:44:51.516665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.562 [2024-12-06 15:44:51.516687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.562 [2024-12-06 15:44:51.516695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.562 [2024-12-06 15:44:51.521824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.562 [2024-12-06 15:44:51.521846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.562 [2024-12-06 15:44:51.521854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.562 [2024-12-06 15:44:51.526942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.562 [2024-12-06 15:44:51.526963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.562 [2024-12-06 15:44:51.526972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.562 [2024-12-06 15:44:51.532106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.562 [2024-12-06 15:44:51.532128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.562 [2024-12-06 15:44:51.532136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.562 [2024-12-06 15:44:51.537274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.562 [2024-12-06 15:44:51.537296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.562 [2024-12-06 15:44:51.537305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.562 [2024-12-06 15:44:51.542497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.562 [2024-12-06 15:44:51.542520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.562 [2024-12-06 15:44:51.542528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.562 [2024-12-06 15:44:51.547720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.562 [2024-12-06 15:44:51.547742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.562 [2024-12-06 15:44:51.547750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.562 [2024-12-06 15:44:51.552945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.562 [2024-12-06 15:44:51.552967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.562 [2024-12-06 15:44:51.552976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.822 [2024-12-06 15:44:51.558215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.822 [2024-12-06 15:44:51.558237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.822 [2024-12-06 15:44:51.558245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.822 [2024-12-06 15:44:51.563447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.822 [2024-12-06 15:44:51.563470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.822 [2024-12-06 15:44:51.563478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.822 [2024-12-06 15:44:51.568682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.822 [2024-12-06 15:44:51.568704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.822 [2024-12-06 15:44:51.568712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.822 [2024-12-06 15:44:51.573842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.822 [2024-12-06 15:44:51.573864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.822 [2024-12-06 15:44:51.573872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.822 [2024-12-06 15:44:51.578967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.822 [2024-12-06 15:44:51.578989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.822 [2024-12-06 15:44:51.578997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.822 [2024-12-06 15:44:51.584151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.822 [2024-12-06 15:44:51.584173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.822 [2024-12-06 15:44:51.584185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.822 [2024-12-06 15:44:51.589331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.589354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.589362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.594489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.594511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.594520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.599639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.599662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.599670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.604781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.604803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.604810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.609950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.609973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.609982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.823 5902.00 IOPS, 737.75 MiB/s [2024-12-06T14:44:51.821Z] [2024-12-06 15:44:51.615824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.615847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.615856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.621020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.621043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.621051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.626174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.626197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.626206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.631394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.631416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.631424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.636716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.636740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.636749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.641964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.641986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.641995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.647149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.647173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.647182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.652426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.652448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.652456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.657638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.657660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.657668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.662826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.662848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.662857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.668010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.668032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.668042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.673144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.673166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.673177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.678329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.678351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.678359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.683540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.683563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.683572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.688829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.688852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.688861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.694055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.694077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.694085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.699314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.699337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.699346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.704515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.704537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.704546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.709721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.709743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.709751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.714907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.714929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.714937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.720121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.720148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.720156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.823 [2024-12-06 15:44:51.725316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.823 [2024-12-06 15:44:51.725339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.823 [2024-12-06 15:44:51.725346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.730562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.730584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.730592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.735760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.735784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.735793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.740930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.740957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.740965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.746156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.746180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.746189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.751329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.751352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.751361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.756549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.756572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.756581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.761737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.761760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.761768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.766980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.767003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.767012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.772199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.772221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.772230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.777412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.777434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.777444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.782584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.782607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.782615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.787761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.787783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.787791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.792939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.792961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.792970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.798109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.798131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.798140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.803313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.803335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.803343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.808398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.808427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.808439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.824 [2024-12-06 15:44:51.813544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:45.824 [2024-12-06 15:44:51.813566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.824 [2024-12-06 15:44:51.813574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:46.084 [2024-12-06 15:44:51.818799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.084 [2024-12-06 15:44:51.818821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.084 [2024-12-06 15:44:51.818831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:46.084 [2024-12-06 15:44:51.824115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.084 [2024-12-06 15:44:51.824148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.084 [2024-12-06 15:44:51.824155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:46.084 [2024-12-06 15:44:51.829323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.084 [2024-12-06 15:44:51.829347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.084 [2024-12-06 15:44:51.829355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:46.084 [2024-12-06 15:44:51.834492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.084 [2024-12-06 15:44:51.834514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.084 [2024-12-06 15:44:51.834523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:46.084 [2024-12-06 15:44:51.839672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.084 [2024-12-06 15:44:51.839694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.084 [2024-12-06 15:44:51.839702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:46.084 [2024-12-06 15:44:51.844850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.084 [2024-12-06 15:44:51.844872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.084 [2024-12-06 15:44:51.844881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:46.084 [2024-12-06 15:44:51.850130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.084 [2024-12-06 15:44:51.850152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.084 [2024-12-06 15:44:51.850160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:46.084 [2024-12-06 15:44:51.855342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.084 [2024-12-06 15:44:51.855376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.084 [2024-12-06 15:44:51.855386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:46.084 [2024-12-06 15:44:51.860544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.084 [2024-12-06 15:44:51.860567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.084 [2024-12-06 15:44:51.860576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:46.084 [2024-12-06 15:44:51.865811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.084 [2024-12-06 15:44:51.865833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.084 [2024-12-06 15:44:51.865842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:46.084 [2024-12-06 15:44:51.871034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.084 [2024-12-06 15:44:51.871057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.084 [2024-12-06 15:44:51.871065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:46.084 [2024-12-06 15:44:51.876235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.084 [2024-12-06 15:44:51.876257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.084 [2024-12-06 15:44:51.876265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:46.084 [2024-12-06 15:44:51.881472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.084 [2024-12-06 15:44:51.881495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.084 [2024-12-06 15:44:51.881505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:46.084 [2024-12-06 15:44:51.886782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.084 [2024-12-06 15:44:51.886804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.085 [2024-12-06 15:44:51.886813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:46.085 [2024-12-06 15:44:51.891995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.085 [2024-12-06 15:44:51.892018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.085 [2024-12-06 15:44:51.892027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:46.085 [2024-12-06 15:44:51.897143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.085 [2024-12-06 15:44:51.897165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.085 [2024-12-06 15:44:51.897174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:46.085 [2024-12-06 15:44:51.902361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.085 [2024-12-06 15:44:51.902390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.085 [2024-12-06 15:44:51.902398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:46.085 [2024-12-06 15:44:51.907541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.085 [2024-12-06 15:44:51.907564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.085 [2024-12-06 15:44:51.907572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:46.085 [2024-12-06 15:44:51.912718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.085 [2024-12-06 15:44:51.912740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.085 [2024-12-06 15:44:51.912748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:46.085 [2024-12-06 15:44:51.917937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.085 [2024-12-06 15:44:51.917959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.085 [2024-12-06 15:44:51.917967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:46.085 [2024-12-06 15:44:51.923143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.085 [2024-12-06 15:44:51.923165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.085 [2024-12-06 15:44:51.923174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:46.085 [2024-12-06 15:44:51.928309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.085 [2024-12-06 15:44:51.928332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.085 [2024-12-06 15:44:51.928341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:51.933477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:51.933499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:51.933508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:51.938691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:51.938713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:51.938722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:51.943939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:51.943960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:51.943973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:51.949322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:51.949346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:51.949356] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:51.954711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:51.954735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:51.954745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:51.960085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:51.960109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:51.960118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:51.965438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:51.965461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:51.965470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:51.970837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:51.970861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:51.970870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:51.977024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:51.977046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:51.977056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:51.982854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:51.982877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:51.982886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:51.988208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:51.988230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:51.988239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:51.993563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:51.993590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:51.993600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:51.998888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:51.998912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:51.998922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:52.004236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:52.004260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:52.004269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:52.009686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:52.009709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:52.009719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:52.015071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:52.015094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:52.015103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:52.020380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:52.020403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:52.020412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:52.025623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:52.025645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.085 [2024-12-06 15:44:52.025654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.085 [2024-12-06 15:44:52.030956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.085 [2024-12-06 15:44:52.030978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.086 [2024-12-06 15:44:52.030986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.086 [2024-12-06 15:44:52.036181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 
00:27:46.086 [2024-12-06 15:44:52.036203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.086 [2024-12-06 15:44:52.036211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.086 [2024-12-06 15:44:52.041423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.086 [2024-12-06 15:44:52.041445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.086 [2024-12-06 15:44:52.041453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.086 [2024-12-06 15:44:52.046651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.086 [2024-12-06 15:44:52.046673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.086 [2024-12-06 15:44:52.046683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.086 [2024-12-06 15:44:52.051863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.086 [2024-12-06 15:44:52.051885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.086 [2024-12-06 15:44:52.051893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.086 [2024-12-06 15:44:52.057256] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.086 [2024-12-06 15:44:52.057279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.086 [2024-12-06 15:44:52.057287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.086 [2024-12-06 15:44:52.062468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.086 [2024-12-06 15:44:52.062490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.086 [2024-12-06 15:44:52.062498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.086 [2024-12-06 15:44:52.067656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.086 [2024-12-06 15:44:52.067677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.086 [2024-12-06 15:44:52.067686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.086 [2024-12-06 15:44:52.072804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.086 [2024-12-06 15:44:52.072826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.086 [2024-12-06 15:44:52.072833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:27:46.086 [2024-12-06 15:44:52.078003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.086 [2024-12-06 15:44:52.078025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.086 [2024-12-06 15:44:52.078033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.083206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.083228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.083240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.088374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.088412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.088420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.093524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.093546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.093554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.098689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.098710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.098719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.103941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.103964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.103973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.109249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.109272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.109280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.113848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.113869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.113878] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.116780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.116801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.116809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.121950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.121971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.121979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.127114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.127134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.127142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.132411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.132433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.132441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.137264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.137286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.137294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.142468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.142490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.142498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.147459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.147482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.147490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.152496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.152517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:12 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.152526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.157571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.157593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.157601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.162708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.162730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.162739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.167941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.167963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.167977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.173145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.173167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.173175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.346 [2024-12-06 15:44:52.178352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.346 [2024-12-06 15:44:52.178380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.346 [2024-12-06 15:44:52.178389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.347 [2024-12-06 15:44:52.183649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.347 [2024-12-06 15:44:52.183672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.347 [2024-12-06 15:44:52.183680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.347 [2024-12-06 15:44:52.188930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.347 [2024-12-06 15:44:52.188952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.347 [2024-12-06 15:44:52.188960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.347 [2024-12-06 15:44:52.194324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 
00:27:46.347 [2024-12-06 15:44:52.194345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.347 [2024-12-06 15:44:52.194353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.347 [2024-12-06 15:44:52.199703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.347 [2024-12-06 15:44:52.199725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.347 [2024-12-06 15:44:52.199734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.347 [2024-12-06 15:44:52.204608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.347 [2024-12-06 15:44:52.204631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.347 [2024-12-06 15:44:52.204639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:46.347 [2024-12-06 15:44:52.209955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0) 00:27:46.347 [2024-12-06 15:44:52.209977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.347 [2024-12-06 15:44:52.209985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:46.347 [2024-12-06 15:44:52.215141] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x655dd0)
00:27:46.347 [2024-12-06 15:44:52.215166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.347 [2024-12-06 15:44:52.215174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same data digest error / READ command / TRANSIENT TRANSPORT ERROR (00/22) completion triplet repeats for the remaining in-flight I/Os on qid:1 (varying cid and lba) from 15:44:52.220 through 15:44:52.610 ...]
00:27:46.869 5926.00 IOPS, 740.75 MiB/s
00:27:46.869 Latency(us)
00:27:46.869 [2024-12-06T14:44:52.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:46.869 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:46.869 nvme0n1 : 2.00 5927.52 740.94 0.00 0.00 2696.63
631.95 10985.08 00:27:46.869 [2024-12-06T14:44:52.867Z] =================================================================================================================== 00:27:46.869 [2024-12-06T14:44:52.867Z] Total : 5927.52 740.94 0.00 0.00 2696.63 631.95 10985.08 00:27:46.869 { 00:27:46.869 "results": [ 00:27:46.869 { 00:27:46.869 "job": "nvme0n1", 00:27:46.869 "core_mask": "0x2", 00:27:46.869 "workload": "randread", 00:27:46.869 "status": "finished", 00:27:46.869 "queue_depth": 16, 00:27:46.869 "io_size": 131072, 00:27:46.869 "runtime": 2.002188, 00:27:46.869 "iops": 5927.515298263699, 00:27:46.869 "mibps": 740.9394122829624, 00:27:46.869 "io_failed": 0, 00:27:46.869 "io_timeout": 0, 00:27:46.869 "avg_latency_us": 2696.6317774888853, 00:27:46.869 "min_latency_us": 631.9542857142857, 00:27:46.869 "max_latency_us": 10985.081904761904 00:27:46.869 } 00:27:46.869 ], 00:27:46.869 "core_count": 1 00:27:46.869 } 00:27:46.869 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:46.869 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:46.869 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:46.870 | .driver_specific 00:27:46.870 | .nvme_error 00:27:46.870 | .status_code 00:27:46.870 | .command_transient_transport_error' 00:27:46.870 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:46.870 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 383 > 0 )) 00:27:46.870 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3158402 00:27:46.870 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 
3158402 ']' 00:27:46.870 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3158402 00:27:46.870 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:46.870 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:46.870 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3158402 00:27:47.129 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:47.129 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:47.129 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3158402' 00:27:47.129 killing process with pid 3158402 00:27:47.129 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3158402 00:27:47.129 Received shutdown signal, test time was about 2.000000 seconds 00:27:47.129 00:27:47.129 Latency(us) 00:27:47.129 [2024-12-06T14:44:53.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.129 [2024-12-06T14:44:53.127Z] =================================================================================================================== 00:27:47.129 [2024-12-06T14:44:53.127Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:47.129 15:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3158402 00:27:47.129 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:47.129 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:47.129 15:44:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:47.129 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:47.129 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:47.129 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3158890 00:27:47.129 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3158890 /var/tmp/bperf.sock 00:27:47.129 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3158890 ']' 00:27:47.129 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:47.129 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.129 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:47.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:47.129 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.129 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:47.129 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:47.129 [2024-12-06 15:44:53.094884] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:27:47.130 [2024-12-06 15:44:53.094944] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3158890 ] 00:27:47.388 [2024-12-06 15:44:53.169139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.388 [2024-12-06 15:44:53.207774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.388 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:47.388 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:47.388 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:47.388 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:47.683 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:47.683 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.683 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:47.683 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.683 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:47.683 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:47.941 nvme0n1 00:27:47.941 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:47.941 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.942 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:47.942 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.942 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:47.942 15:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:47.942 Running I/O for 2 seconds... 
00:27:47.942 [2024-12-06 15:44:53.893913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eef6a8 00:27:47.942 [2024-12-06 15:44:53.895032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.942 [2024-12-06 15:44:53.895062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:47.942 [2024-12-06 15:44:53.903169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee1f80 00:27:47.942 [2024-12-06 15:44:53.903801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.942 [2024-12-06 15:44:53.903825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:47.942 [2024-12-06 15:44:53.911693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef3a28 00:27:47.942 [2024-12-06 15:44:53.912685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.942 [2024-12-06 15:44:53.912705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:47.942 [2024-12-06 15:44:53.920921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eeb760 00:27:47.942 [2024-12-06 15:44:53.921431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.942 [2024-12-06 15:44:53.921451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:47.942 [2024-12-06 15:44:53.929530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef7da8 00:27:47.942 [2024-12-06 15:44:53.930358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.942 [2024-12-06 15:44:53.930382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:53.939075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef2d80 00:27:48.202 [2024-12-06 15:44:53.940046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 [2024-12-06 15:44:53.940065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:53.948601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eeff18 00:27:48.202 [2024-12-06 15:44:53.949663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 [2024-12-06 15:44:53.949682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:53.956867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016edece0 00:27:48.202 [2024-12-06 15:44:53.957494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 [2024-12-06 15:44:53.957514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:53.965086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef92c0 00:27:48.202 [2024-12-06 15:44:53.965789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 [2024-12-06 15:44:53.965808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:53.974414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eea248 00:27:48.202 [2024-12-06 15:44:53.975248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 [2024-12-06 15:44:53.975267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:53.985274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef9f68 00:27:48.202 [2024-12-06 15:44:53.986487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 [2024-12-06 15:44:53.986506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:53.992963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef1430 00:27:48.202 [2024-12-06 15:44:53.993471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 [2024-12-06 15:44:53.993491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:54.002144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee38d0 00:27:48.202 [2024-12-06 15:44:54.002886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 [2024-12-06 15:44:54.002906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:54.010301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef8e88 00:27:48.202 [2024-12-06 15:44:54.011115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 [2024-12-06 15:44:54.011134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:54.019630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef5378 00:27:48.202 [2024-12-06 15:44:54.020603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 [2024-12-06 15:44:54.020622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:54.028679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efc560 00:27:48.202 [2024-12-06 15:44:54.029181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 
[2024-12-06 15:44:54.029201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:54.038062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee5658 00:27:48.202 [2024-12-06 15:44:54.038688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 [2024-12-06 15:44:54.038710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:54.047537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efa7d8 00:27:48.202 [2024-12-06 15:44:54.048268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 [2024-12-06 15:44:54.048292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:54.057955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eedd58 00:27:48.202 [2024-12-06 15:44:54.059519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 [2024-12-06 15:44:54.059540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:54.064631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef6458 00:27:48.202 [2024-12-06 15:44:54.065498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19849 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.202 [2024-12-06 15:44:54.065519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:48.202 [2024-12-06 15:44:54.075580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee38d0 00:27:48.202 [2024-12-06 15:44:54.076823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.203 [2024-12-06 15:44:54.076843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:48.203 [2024-12-06 15:44:54.084040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eee5c8 00:27:48.203 [2024-12-06 15:44:54.085246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.203 [2024-12-06 15:44:54.085265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:48.203 [2024-12-06 15:44:54.092327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee38d0 00:27:48.203 [2024-12-06 15:44:54.093086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.203 [2024-12-06 15:44:54.093104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:48.203 [2024-12-06 15:44:54.101411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee9e10 00:27:48.203 [2024-12-06 15:44:54.102402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:38 nsid:1 lba:2156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.203 [2024-12-06 15:44:54.102422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:48.203 [2024-12-06 15:44:54.110446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef2948 00:27:48.203 [2024-12-06 15:44:54.111321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.203 [2024-12-06 15:44:54.111341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:48.203 [2024-12-06 15:44:54.119628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef1430 00:27:48.203 [2024-12-06 15:44:54.120706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.203 [2024-12-06 15:44:54.120725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:48.203 [2024-12-06 15:44:54.128667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eebfd0 00:27:48.203 [2024-12-06 15:44:54.129665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.203 [2024-12-06 15:44:54.129687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.203 [2024-12-06 15:44:54.137573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef7100 00:27:48.203 [2024-12-06 15:44:54.138569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.203 [2024-12-06 15:44:54.138588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.203 [2024-12-06 15:44:54.146491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efe2e8 00:27:48.203 [2024-12-06 15:44:54.147535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.203 [2024-12-06 15:44:54.147555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.203 [2024-12-06 15:44:54.155727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eef270 00:27:48.203 [2024-12-06 15:44:54.156750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.203 [2024-12-06 15:44:54.156770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.203 [2024-12-06 15:44:54.165060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef92c0 00:27:48.203 [2024-12-06 15:44:54.166292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.203 [2024-12-06 15:44:54.166310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:48.203 [2024-12-06 15:44:54.172458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee5220 00:27:48.203 
[2024-12-06 15:44:54.173098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.203 [2024-12-06 15:44:54.173117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:48.203 [2024-12-06 15:44:54.181676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef8618
00:27:48.203 [2024-12-06 15:44:54.182514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.203 [2024-12-06 15:44:54.182533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:48.203 [2024-12-06 15:44:54.191042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eebb98
00:27:48.203 [2024-12-06 15:44:54.192051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.203 [2024-12-06 15:44:54.192070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:27:48.463 [2024-12-06 15:44:54.202401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee0630
00:27:48.463 [2024-12-06 15:44:54.203996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.463 [2024-12-06 15:44:54.204016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:27:48.463 [2024-12-06 15:44:54.208766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef6cc8
00:27:48.463 [2024-12-06 15:44:54.209424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.463 [2024-12-06 15:44:54.209443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:48.463 [2024-12-06 15:44:54.218118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef96f8
00:27:48.463 [2024-12-06 15:44:54.219089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.463 [2024-12-06 15:44:54.219108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:48.463 [2024-12-06 15:44:54.227460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef31b8
00:27:48.463 [2024-12-06 15:44:54.228590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.463 [2024-12-06 15:44:54.228610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:48.463 [2024-12-06 15:44:54.235848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eefae0
00:27:48.463 [2024-12-06 15:44:54.236817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.463 [2024-12-06 15:44:54.236837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:48.463 [2024-12-06 15:44:54.244802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efe2e8
00:27:48.463 [2024-12-06 15:44:54.245562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.245582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.254957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee9e10
00:27:48.464 [2024-12-06 15:44:54.256173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.256192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.264046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016edf988
00:27:48.464 [2024-12-06 15:44:54.265262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.265281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.272078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef46d0
00:27:48.464 [2024-12-06 15:44:54.273344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.273363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.281174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eed4e8
00:27:48.464 [2024-12-06 15:44:54.282078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.282098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.290587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee9e10
00:27:48.464 [2024-12-06 15:44:54.291502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.291521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.299953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efdeb0
00:27:48.464 [2024-12-06 15:44:54.301185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.301205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.309267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee99d8
00:27:48.464 [2024-12-06 15:44:54.310629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.310648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.317548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee12d8
00:27:48.464 [2024-12-06 15:44:54.318542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.318562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.327492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eedd58
00:27:48.464 [2024-12-06 15:44:54.328921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.328940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.333774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ede8a8
00:27:48.464 [2024-12-06 15:44:54.334422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.334441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.343097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef35f0
00:27:48.464 [2024-12-06 15:44:54.343868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.343887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.352031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef3a28
00:27:48.464 [2024-12-06 15:44:54.352886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.352906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.361955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef9b30
00:27:48.464 [2024-12-06 15:44:54.362954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.362980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.371163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee5220
00:27:48.464 [2024-12-06 15:44:54.372271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.372290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.379610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef1868
00:27:48.464 [2024-12-06 15:44:54.380704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.380723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.387924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee9168
00:27:48.464 [2024-12-06 15:44:54.388702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.388721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.397844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eeaab8
00:27:48.464 [2024-12-06 15:44:54.399085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.399105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.406439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eec408
00:27:48.464 [2024-12-06 15:44:54.407252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.407272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.415485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee4de8
00:27:48.464 [2024-12-06 15:44:54.416289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.416308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.424354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee4de8
00:27:48.464 [2024-12-06 15:44:54.425233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.425252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.433232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee4de8
00:27:48.464 [2024-12-06 15:44:54.434108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.434127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.442110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee4de8
00:27:48.464 [2024-12-06 15:44:54.443004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.443023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:48.464 [2024-12-06 15:44:54.450388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016edfdc0
00:27:48.464 [2024-12-06 15:44:54.451255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.464 [2024-12-06 15:44:54.451275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:48.724 [2024-12-06 15:44:54.459674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eed4e8
00:27:48.724 [2024-12-06 15:44:54.460575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.724 [2024-12-06 15:44:54.460595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:48.724 [2024-12-06 15:44:54.469020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef3a28
00:27:48.724 [2024-12-06 15:44:54.469897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.724 [2024-12-06 15:44:54.469917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:48.724 [2024-12-06 15:44:54.478101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efc560
00:27:48.724 [2024-12-06 15:44:54.478889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.724 [2024-12-06 15:44:54.478909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:48.724 [2024-12-06 15:44:54.487424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eeea00
00:27:48.724 [2024-12-06 15:44:54.488091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.724 [2024-12-06 15:44:54.488111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:48.724 [2024-12-06 15:44:54.495712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eeb760
00:27:48.724 [2024-12-06 15:44:54.496513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.724 [2024-12-06 15:44:54.496532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:48.724 [2024-12-06 15:44:54.504703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee88f8
00:27:48.724 [2024-12-06 15:44:54.505352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.724 [2024-12-06 15:44:54.505378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:48.724 [2024-12-06 15:44:54.514330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efd640
00:27:48.725 [2024-12-06 15:44:54.515357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.515380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.523603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee12d8
00:27:48.725 [2024-12-06 15:44:54.524612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.524632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.532800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016edece0
00:27:48.725 [2024-12-06 15:44:54.533804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.533824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.541242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef0ff8
00:27:48.725 [2024-12-06 15:44:54.542153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.542172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.551856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef0ff8
00:27:48.725 [2024-12-06 15:44:54.553232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.553251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.559573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef31b8
00:27:48.725 [2024-12-06 15:44:54.560480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.560506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.568874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eecc78
00:27:48.725 [2024-12-06 15:44:54.570116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.570134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.577280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eedd58
00:27:48.725 [2024-12-06 15:44:54.578218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.578238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.586831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee01f8
00:27:48.725 [2024-12-06 15:44:54.587860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.587880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.596046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee88f8
00:27:48.725 [2024-12-06 15:44:54.597185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.597208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.603375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef9b30
00:27:48.725 [2024-12-06 15:44:54.603973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.603994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.613505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee5220
00:27:48.725 [2024-12-06 15:44:54.614644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.614664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.622514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef4298
00:27:48.725 [2024-12-06 15:44:54.623203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.623223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.631232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eeaab8
00:27:48.725 [2024-12-06 15:44:54.632152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.632172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.640326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef35f0
00:27:48.725 [2024-12-06 15:44:54.641281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.641300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.649664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eddc00
00:27:48.725 [2024-12-06 15:44:54.650728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.650748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.659223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016edf118
00:27:48.725 [2024-12-06 15:44:54.660424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.660444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.668730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee01f8
00:27:48.725 [2024-12-06 15:44:54.670172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.670192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.675277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee8088
00:27:48.725 [2024-12-06 15:44:54.675972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.675991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.686501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee3d08
00:27:48.725 [2024-12-06 15:44:54.687590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.687610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.694241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef57b0
00:27:48.725 [2024-12-06 15:44:54.694860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.725 [2024-12-06 15:44:54.694880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:27:48.725 [2024-12-06 15:44:54.703446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efdeb0
00:27:48.725 [2024-12-06 15:44:54.704172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.726 [2024-12-06 15:44:54.704192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:27:48.726 [2024-12-06 15:44:54.713039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef6cc8
00:27:48.726 [2024-12-06 15:44:54.714119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.726 [2024-12-06 15:44:54.714138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:27:48.986 [2024-12-06 15:44:54.722376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee23b8
00:27:48.986 [2024-12-06 15:44:54.723010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.986 [2024-12-06 15:44:54.723031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:48.986 [2024-12-06 15:44:54.731614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efc128
00:27:48.986 [2024-12-06 15:44:54.732563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.986 [2024-12-06 15:44:54.732583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:48.986 [2024-12-06 15:44:54.740518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef6cc8
00:27:48.986 [2024-12-06 15:44:54.741398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.986 [2024-12-06 15:44:54.741417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:48.986 [2024-12-06 15:44:54.748836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee6fa8
00:27:48.986 [2024-12-06 15:44:54.749690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.986 [2024-12-06 15:44:54.749709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:27:48.986 [2024-12-06 15:44:54.757927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee12d8
00:27:48.986 [2024-12-06 15:44:54.758894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.986 [2024-12-06 15:44:54.758915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:27:48.986 [2024-12-06 15:44:54.767100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef0350
00:27:48.986 [2024-12-06 15:44:54.767724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.986 [2024-12-06 15:44:54.767745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:48.986 [2024-12-06 15:44:54.775654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee6738
00:27:48.986 [2024-12-06 15:44:54.776702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.986 [2024-12-06 15:44:54.776723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:27:48.986 [2024-12-06 15:44:54.784661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef7538
00:27:48.986 [2024-12-06 15:44:54.785525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.986 [2024-12-06 15:44:54.785545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:48.986 [2024-12-06 15:44:54.792947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eeff18
00:27:48.986 [2024-12-06 15:44:54.793659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.986 [2024-12-06 15:44:54.793679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:48.986 [2024-12-06 15:44:54.801517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee4578
00:27:48.986 [2024-12-06 15:44:54.802133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.986 [2024-12-06 15:44:54.802153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:48.986 [2024-12-06 15:44:54.810898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee1b48
00:27:48.986 [2024-12-06 15:44:54.811746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.986 [2024-12-06 15:44:54.811765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:48.986 [2024-12-06 15:44:54.822002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef2d80
00:27:48.987 [2024-12-06 15:44:54.823318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.987 [2024-12-06 15:44:54.823337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:48.987 [2024-12-06 15:44:54.830362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efb048
00:27:48.987 [2024-12-06 15:44:54.831480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.987 [2024-12-06 15:44:54.831503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:48.987 [2024-12-06 15:44:54.839137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016edfdc0
00:27:48.987 [2024-12-06 15:44:54.839906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.987 [2024-12-06 15:44:54.839927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:48.987 [2024-12-06 15:44:54.847907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efdeb0
00:27:48.987 [2024-12-06 15:44:54.848770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.987 [2024-12-06 15:44:54.848790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:48.987 [2024-12-06 15:44:54.856852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef2948
00:27:48.987 [2024-12-06 15:44:54.857729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.987 [2024-12-06 15:44:54.857748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:48.987 [2024-12-06 15:44:54.865768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef4f40
00:27:48.987 [2024-12-06 15:44:54.866648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.987 [2024-12-06 15:44:54.866669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:48.987 [2024-12-06 15:44:54.874676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef9f68
00:27:48.987 [2024-12-06 15:44:54.875532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.987 [2024-12-06 15:44:54.875551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:48.987 [2024-12-06 15:44:54.883836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eed0b0
00:27:48.987 [2024-12-06 15:44:54.885495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.987 [2024-12-06 15:44:54.885514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:48.987 28247.00 IOPS, 110.34 MiB/s [2024-12-06T14:44:54.985Z] [2024-12-06 15:44:54.892975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ede470
00:27:48.987 [2024-12-06 15:44:54.893856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.987 [2024-12-06 15:44:54.893875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:48.987 [2024-12-06 15:44:54.901926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee38d0
00:27:48.987 [2024-12-06 15:44:54.902839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.987 [2024-12-06 15:44:54.902859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:48.987 [2024-12-06 15:44:54.910534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee95a0 [2024-12-06
15:44:54.911436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.987 [2024-12-06 15:44:54.911456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:48.987 [2024-12-06 15:44:54.920650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef7da8 00:27:48.987 [2024-12-06 15:44:54.921543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.987 [2024-12-06 15:44:54.921564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:48.987 [2024-12-06 15:44:54.929103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee3060 00:27:48.987 [2024-12-06 15:44:54.930055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.987 [2024-12-06 15:44:54.930074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:48.987 [2024-12-06 15:44:54.939074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eed920 00:27:48.987 [2024-12-06 15:44:54.940166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.987 [2024-12-06 15:44:54.940185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:48.987 [2024-12-06 15:44:54.948012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with 
pdu=0x200016ee49b0 00:27:48.987 [2024-12-06 15:44:54.949116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.987 [2024-12-06 15:44:54.949135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:48.987 [2024-12-06 15:44:54.957208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee3060 00:27:48.987 [2024-12-06 15:44:54.958431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.987 [2024-12-06 15:44:54.958450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:48.987 [2024-12-06 15:44:54.965139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee1b48 00:27:48.987 [2024-12-06 15:44:54.966417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.987 [2024-12-06 15:44:54.966436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:48.987 [2024-12-06 15:44:54.973379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef96f8 00:27:48.987 [2024-12-06 15:44:54.974118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.987 [2024-12-06 15:44:54.974136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:54.981879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x579d90) with pdu=0x200016ee1710 00:27:49.245 [2024-12-06 15:44:54.982635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:54.982654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:54.992925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016edece0 00:27:49.245 [2024-12-06 15:44:54.994011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:54.994030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.001969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efe2e8 00:27:49.245 [2024-12-06 15:44:55.003071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.003091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.011215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efa3a0 00:27:49.245 [2024-12-06 15:44:55.012410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.012429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.018869] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee1b48 00:27:49.245 [2024-12-06 15:44:55.019396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.019414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.028169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef3e60 00:27:49.245 [2024-12-06 15:44:55.028820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.028840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.037244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef2d80 00:27:49.245 [2024-12-06 15:44:55.038210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.038229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.046185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eed4e8 00:27:49.245 [2024-12-06 15:44:55.047151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.047169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:27:49.245 [2024-12-06 15:44:55.055668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eeaab8 00:27:49.245 [2024-12-06 15:44:55.056442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.056462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.064751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efb8b8 00:27:49.245 [2024-12-06 15:44:55.065833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.065854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.073010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee8088 00:27:49.245 [2024-12-06 15:44:55.074347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.074371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.081262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee12d8 00:27:49.245 [2024-12-06 15:44:55.082012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.082031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.090366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efda78 00:27:49.245 [2024-12-06 15:44:55.091126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.091145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.099303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eee5c8 00:27:49.245 [2024-12-06 15:44:55.100077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.100096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.108266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef1430 00:27:49.245 [2024-12-06 15:44:55.109060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.109081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.117180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eefae0 00:27:49.245 [2024-12-06 15:44:55.117944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.117964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.126387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eebfd0 00:27:49.245 [2024-12-06 15:44:55.127216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.127236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.134839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef8a50 00:27:49.245 [2024-12-06 15:44:55.135706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.135725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.145843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee01f8 00:27:49.245 [2024-12-06 15:44:55.147219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.147241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.152997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef7538 00:27:49.245 [2024-12-06 15:44:55.153881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.153901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.163275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee5658 00:27:49.245 [2024-12-06 15:44:55.164273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.164293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.172325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efeb58 00:27:49.245 [2024-12-06 15:44:55.173331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.173350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.181382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee88f8 00:27:49.245 [2024-12-06 15:44:55.182357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.182379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.190308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee99d8 00:27:49.245 [2024-12-06 15:44:55.191295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 
[2024-12-06 15:44:55.191316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.199234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef6890 00:27:49.245 [2024-12-06 15:44:55.200236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.245 [2024-12-06 15:44:55.200255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:49.245 [2024-12-06 15:44:55.208179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efe2e8 00:27:49.246 [2024-12-06 15:44:55.209170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.246 [2024-12-06 15:44:55.209189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:49.246 [2024-12-06 15:44:55.216520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee7c50 00:27:49.246 [2024-12-06 15:44:55.217495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.246 [2024-12-06 15:44:55.217515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:49.246 [2024-12-06 15:44:55.225689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eddc00 00:27:49.246 [2024-12-06 15:44:55.226237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9346 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:49.246 [2024-12-06 15:44:55.226258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:49.246 [2024-12-06 15:44:55.235977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef57b0 00:27:49.246 [2024-12-06 15:44:55.237297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.246 [2024-12-06 15:44:55.237316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:49.503 [2024-12-06 15:44:55.244491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eeb328 00:27:49.503 [2024-12-06 15:44:55.245508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.503 [2024-12-06 15:44:55.245528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:49.503 [2024-12-06 15:44:55.254519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef9b30 00:27:49.503 [2024-12-06 15:44:55.255954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.503 [2024-12-06 15:44:55.255973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:49.503 [2024-12-06 15:44:55.260829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eed920 00:27:49.503 [2024-12-06 15:44:55.261469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:84 nsid:1 lba:23512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.503 [2024-12-06 15:44:55.261489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:49.503 [2024-12-06 15:44:55.270181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef57b0 00:27:49.503 [2024-12-06 15:44:55.270938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.503 [2024-12-06 15:44:55.270958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:49.503 [2024-12-06 15:44:55.278496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efb8b8 00:27:49.503 [2024-12-06 15:44:55.279128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.503 [2024-12-06 15:44:55.279148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:49.503 [2024-12-06 15:44:55.288720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef3a28 00:27:49.503 [2024-12-06 15:44:55.290154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.503 [2024-12-06 15:44:55.290174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:49.503 [2024-12-06 15:44:55.297128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee5a90 00:27:49.503 [2024-12-06 15:44:55.297893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.503 [2024-12-06 15:44:55.297913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:49.503 [2024-12-06 15:44:55.306071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efb480 00:27:49.503 [2024-12-06 15:44:55.306824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.503 [2024-12-06 15:44:55.306845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:49.503 [2024-12-06 15:44:55.315280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef1868 00:27:49.503 [2024-12-06 15:44:55.316150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.503 [2024-12-06 15:44:55.316169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:49.503 [2024-12-06 15:44:55.324336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee95a0 00:27:49.503 [2024-12-06 15:44:55.325185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.503 [2024-12-06 15:44:55.325204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:49.503 [2024-12-06 15:44:55.333267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee73e0 00:27:49.503 
[2024-12-06 15:44:55.334139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.503 [2024-12-06 15:44:55.334159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:49.503 [2024-12-06 15:44:55.342180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef35f0 00:27:49.503 [2024-12-06 15:44:55.343059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.503 [2024-12-06 15:44:55.343079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.351235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee84c0 00:27:49.504 [2024-12-06 15:44:55.352132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.352154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.360310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eeea00 00:27:49.504 [2024-12-06 15:44:55.361195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.361214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.369299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x579d90) with pdu=0x200016eed920 00:27:49.504 [2024-12-06 15:44:55.370159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.370178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.378229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eeff18 00:27:49.504 [2024-12-06 15:44:55.379097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.379120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.386601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef8e88 00:27:49.504 [2024-12-06 15:44:55.387452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.387470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.395645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eedd58 00:27:49.504 [2024-12-06 15:44:55.396519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.396538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.404334] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef4298 00:27:49.504 [2024-12-06 15:44:55.405236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.405256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.415684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efeb58 00:27:49.504 [2024-12-06 15:44:55.417029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.417049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.425180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eedd58 00:27:49.504 [2024-12-06 15:44:55.426721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.426741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.431793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee7818 00:27:49.504 [2024-12-06 15:44:55.432549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.432569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:27:49.504 [2024-12-06 15:44:55.441170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef7da8 00:27:49.504 [2024-12-06 15:44:55.442074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.442092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.450061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef8e88 00:27:49.504 [2024-12-06 15:44:55.450625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.450645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.459960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef8e88 00:27:49.504 [2024-12-06 15:44:55.461072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.461092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.468268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee23b8 00:27:49.504 [2024-12-06 15:44:55.468934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.468953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.476418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eecc78 00:27:49.504 [2024-12-06 15:44:55.477173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.477192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.487405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef1ca0 00:27:49.504 [2024-12-06 15:44:55.488540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.488560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:49.504 [2024-12-06 15:44:55.494962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef7970 00:27:49.504 [2024-12-06 15:44:55.495416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.504 [2024-12-06 15:44:55.495438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.506630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eefae0 00:27:49.763 [2024-12-06 15:44:55.508113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.508133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.512958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee6738 00:27:49.763 [2024-12-06 15:44:55.513621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.513641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.523831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef92c0 00:27:49.763 [2024-12-06 15:44:55.525090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.525110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.532225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee6b70 00:27:49.763 [2024-12-06 15:44:55.533196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.533216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.541142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef31b8 00:27:49.763 [2024-12-06 15:44:55.541960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.541980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.551095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efc560 00:27:49.763 [2024-12-06 15:44:55.552355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.552378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.557659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef81e0 00:27:49.763 [2024-12-06 15:44:55.558238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.558257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.568996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eed4e8 00:27:49.763 [2024-12-06 15:44:55.570151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.570170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.577365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eff3c8 00:27:49.763 [2024-12-06 15:44:55.578275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 
[2024-12-06 15:44:55.578295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.586480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eee5c8 00:27:49.763 [2024-12-06 15:44:55.587418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.587437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.597450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efdeb0 00:27:49.763 [2024-12-06 15:44:55.598869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.598888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.606760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef2948 00:27:49.763 [2024-12-06 15:44:55.608316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.608336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.613264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eed0b0 00:27:49.763 [2024-12-06 15:44:55.613898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17965 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.613921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.622872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef8618 00:27:49.763 [2024-12-06 15:44:55.623862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.623882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.633883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef3a28 00:27:49.763 [2024-12-06 15:44:55.635377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.635397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.640360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef3a28 00:27:49.763 [2024-12-06 15:44:55.641113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.641132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.651485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef8618 00:27:49.763 [2024-12-06 15:44:55.652738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:16232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.652758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.659899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef8a50 00:27:49.763 [2024-12-06 15:44:55.660867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.660887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.668981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eeea00 00:27:49.763 [2024-12-06 15:44:55.669907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.763 [2024-12-06 15:44:55.669928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:49.763 [2024-12-06 15:44:55.679268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eed4e8 00:27:49.764 [2024-12-06 15:44:55.680644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.764 [2024-12-06 15:44:55.680664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:49.764 [2024-12-06 15:44:55.685776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef3a28 00:27:49.764 [2024-12-06 15:44:55.686448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.764 [2024-12-06 15:44:55.686467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:49.764 [2024-12-06 15:44:55.696865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efda78 00:27:49.764 [2024-12-06 15:44:55.698004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.764 [2024-12-06 15:44:55.698024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:49.764 [2024-12-06 15:44:55.705920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef92c0 00:27:49.764 [2024-12-06 15:44:55.707055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.764 [2024-12-06 15:44:55.707076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:49.764 [2024-12-06 15:44:55.715259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee1f80 00:27:49.764 [2024-12-06 15:44:55.716420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.764 [2024-12-06 15:44:55.716440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:49.764 [2024-12-06 15:44:55.723565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eedd58 00:27:49.764 
[2024-12-06 15:44:55.724611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.764 [2024-12-06 15:44:55.724630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:49.764 [2024-12-06 15:44:55.733908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eedd58 00:27:49.764 [2024-12-06 15:44:55.735401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.764 [2024-12-06 15:44:55.735419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:49.764 [2024-12-06 15:44:55.740202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee5658 00:27:49.764 [2024-12-06 15:44:55.740868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.764 [2024-12-06 15:44:55.740887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:49.764 [2024-12-06 15:44:55.749294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef0350 00:27:49.764 [2024-12-06 15:44:55.749878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.764 [2024-12-06 15:44:55.749899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:50.023 [2024-12-06 15:44:55.758655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x579d90) with pdu=0x200016efb048 00:27:50.023 [2024-12-06 15:44:55.759433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.759454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:50.023 [2024-12-06 15:44:55.768281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eed4e8 00:27:50.023 [2024-12-06 15:44:55.769327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.769347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:50.023 [2024-12-06 15:44:55.779334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee1710 00:27:50.023 [2024-12-06 15:44:55.780866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.780884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:50.023 [2024-12-06 15:44:55.785949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016efc560 00:27:50.023 [2024-12-06 15:44:55.786780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.786799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.023 [2024-12-06 15:44:55.796978] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee23b8 00:27:50.023 [2024-12-06 15:44:55.798295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.798314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:50.023 [2024-12-06 15:44:55.805384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef35f0 00:27:50.023 [2024-12-06 15:44:55.806467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.806488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:50.023 [2024-12-06 15:44:55.814385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee5a90 00:27:50.023 [2024-12-06 15:44:55.815461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.815481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:50.023 [2024-12-06 15:44:55.823685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef4b08 00:27:50.023 [2024-12-06 15:44:55.824901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.824921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:27:50.023 [2024-12-06 15:44:55.832738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef7538 00:27:50.023 [2024-12-06 15:44:55.833871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.833892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:50.023 [2024-12-06 15:44:55.841360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee6fa8 00:27:50.023 [2024-12-06 15:44:55.842255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.842275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:50.023 [2024-12-06 15:44:55.849547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016edf988 00:27:50.023 [2024-12-06 15:44:55.850429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.850449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:50.023 [2024-12-06 15:44:55.858873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee5a90 00:27:50.023 [2024-12-06 15:44:55.859864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.859885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:50.023 [2024-12-06 15:44:55.868332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ee5a90 00:27:50.023 [2024-12-06 15:44:55.869335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.869354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:50.023 [2024-12-06 15:44:55.876844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016ef2948 00:27:50.023 [2024-12-06 15:44:55.877870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.877890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:50.023 [2024-12-06 15:44:55.885893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x579d90) with pdu=0x200016eec840 00:27:50.023 [2024-12-06 15:44:55.887466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.023 [2024-12-06 15:44:55.887487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:50.023 28260.00 IOPS, 110.39 MiB/s 00:27:50.023 Latency(us) 00:27:50.023 [2024-12-06T14:44:56.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.023 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:50.023 nvme0n1 : 2.00 28294.32 110.52 0.00 0.00 4519.80 1763.23 12483.05 00:27:50.023 [2024-12-06T14:44:56.021Z] 
=================================================================================================================== 00:27:50.023 [2024-12-06T14:44:56.021Z] Total : 28294.32 110.52 0.00 0.00 4519.80 1763.23 12483.05 00:27:50.023 { 00:27:50.023 "results": [ 00:27:50.023 { 00:27:50.023 "job": "nvme0n1", 00:27:50.023 "core_mask": "0x2", 00:27:50.023 "workload": "randwrite", 00:27:50.023 "status": "finished", 00:27:50.023 "queue_depth": 128, 00:27:50.023 "io_size": 4096, 00:27:50.023 "runtime": 2.002098, 00:27:50.023 "iops": 28294.319259097207, 00:27:50.023 "mibps": 110.52468460584846, 00:27:50.023 "io_failed": 0, 00:27:50.023 "io_timeout": 0, 00:27:50.023 "avg_latency_us": 4519.799692537374, 00:27:50.023 "min_latency_us": 1763.230476190476, 00:27:50.023 "max_latency_us": 12483.047619047618 00:27:50.023 } 00:27:50.023 ], 00:27:50.023 "core_count": 1 00:27:50.023 } 00:27:50.023 15:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:50.023 15:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:50.023 15:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:50.023 | .driver_specific 00:27:50.023 | .nvme_error 00:27:50.023 | .status_code 00:27:50.023 | .command_transient_transport_error' 00:27:50.023 15:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:50.283 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 222 > 0 )) 00:27:50.283 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3158890 00:27:50.283 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3158890 ']' 00:27:50.283 15:44:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3158890 00:27:50.283 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:50.283 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:50.283 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3158890 00:27:50.283 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:50.283 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:50.283 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3158890' 00:27:50.283 killing process with pid 3158890 00:27:50.283 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3158890 00:27:50.283 Received shutdown signal, test time was about 2.000000 seconds 00:27:50.283 00:27:50.283 Latency(us) 00:27:50.283 [2024-12-06T14:44:56.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.283 [2024-12-06T14:44:56.281Z] =================================================================================================================== 00:27:50.283 [2024-12-06T14:44:56.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:50.283 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3158890 00:27:50.542 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:50.542 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:50.542 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- 
# rw=randwrite 00:27:50.542 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:50.542 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:50.542 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3159568 00:27:50.542 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3159568 /var/tmp/bperf.sock 00:27:50.542 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:50.542 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3159568 ']' 00:27:50.542 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:50.542 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:50.542 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:50.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:50.542 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:50.542 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:50.542 [2024-12-06 15:44:56.363520] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:27:50.543 [2024-12-06 15:44:56.363573] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159568 ] 00:27:50.543 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:50.543 Zero copy mechanism will not be used. 00:27:50.543 [2024-12-06 15:44:56.439459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.543 [2024-12-06 15:44:56.476134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.800 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:50.800 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:50.800 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:50.800 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:50.800 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:50.801 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.801 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:50.801 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.801 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:50.801 15:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:51.367 nvme0n1 00:27:51.367 15:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:51.367 15:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.367 15:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:51.367 15:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.367 15:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:51.367 15:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:51.367 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:51.367 Zero copy mechanism will not be used. 00:27:51.367 Running I/O for 2 seconds... 
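The xtrace lines above (digest.sh@56 through @69) capture the whole error-injection sequence before the digest errors start appearing. As a dry-run outline, the flow looks roughly like the sketch below; it only *prints* each command rather than executing it, since the real invocations need a live SPDK target plus the bperf RPC socket, and the per-step comments are an interpretation of the log rather than anything the log states directly.

```shell
#!/bin/sh
# Dry-run sketch of the nvmf_digest_error flow seen in this log.
# Paths and addresses are copied from the xtrace output; run() echoes
# instead of executing because the commands need a live SPDK target.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

run() { echo "+ $*"; }

# digest.sh@57: start bdevperf listening on the bperf RPC socket
# (randwrite, 128 KiB I/O, queue depth 16, 2-second runs, wait mode -z)
run "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" \
    -w randwrite -o 131072 -t 2 -q 16 -z

# digest.sh@61: enable per-controller error stats and retry failed I/O
# indefinitely (--bdev-retry-count -1), so injected errors do not fail the run
run "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# digest.sh@63: target-side RPC (default socket): start with crc32c
# error injection disabled
run "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# digest.sh@64: attach the controller with data digest enabled (--ddgst),
# so each TCP PDU's payload CRC32C is verified
run "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# digest.sh@67: target-side RPC: inject corrupted crc32c results
# (-i 32, as captured in the log)
run "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# digest.sh@69: kick off the timed I/O run; the "Data digest error" /
# "TRANSIENT TRANSPORT ERROR (00/22)" lines that follow are the expected result
run "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
```

With digest verification on and CRC32C corruption injected, every affected write completes with the transient transport error seen repeatedly below, and the `--bdev-retry-count -1` setting is what keeps the run going for the full 2 seconds instead of aborting on the first failure.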
00:27:51.367 [2024-12-06 15:44:57.186938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.367 [2024-12-06 15:44:57.187027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.367 [2024-12-06 15:44:57.187056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.367 [2024-12-06 15:44:57.191357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.367 [2024-12-06 15:44:57.191432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.367 [2024-12-06 15:44:57.191453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.367 [2024-12-06 15:44:57.195660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.367 [2024-12-06 15:44:57.195742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.367 [2024-12-06 15:44:57.195762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.367 [2024-12-06 15:44:57.199822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.199897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.199917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.203954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.204029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.204048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.208116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.208197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.208217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.212213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.212269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.212288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.216305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.216365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.216391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.220420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.220482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.220502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.224503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.224572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.224591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.228561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.228626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.228645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.232612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.232678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.232698] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.236728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.236784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.236803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.240873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.240939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.240959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.244979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.245037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.245056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.248979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.249036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:51.368 [2024-12-06 15:44:57.249056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.253047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.253113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.253132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.257125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.257201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.257220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.261192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.261252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.261271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.265220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.265290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.265309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.269263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.269327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.269349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.273340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.273406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.273425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.277429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.277490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.277509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.281430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.281494] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.281513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.285364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.285450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.285469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.289345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.289431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.289450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.293327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.293396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.293415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.297318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.297384] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.297403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.301281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.301345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.301364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.305257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.368 [2024-12-06 15:44:57.305329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.368 [2024-12-06 15:44:57.305349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.368 [2024-12-06 15:44:57.309363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.369 [2024-12-06 15:44:57.309433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.369 [2024-12-06 15:44:57.309453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.369 [2024-12-06 15:44:57.313390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with 
pdu=0x200016eff3c8 00:27:51.369 [2024-12-06 15:44:57.313496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.369 [2024-12-06 15:44:57.313515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.369 [2024-12-06 15:44:57.318042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.369 [2024-12-06 15:44:57.318160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.369 [2024-12-06 15:44:57.318180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.369 [2024-12-06 15:44:57.322275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.369 [2024-12-06 15:44:57.322333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.369 [2024-12-06 15:44:57.322353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.369 [2024-12-06 15:44:57.326556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.369 [2024-12-06 15:44:57.326620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.369 [2024-12-06 15:44:57.326640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.369 [2024-12-06 15:44:57.331449] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.369 [2024-12-06 15:44:57.331499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.369 [2024-12-06 15:44:57.331518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.369 [2024-12-06 15:44:57.336516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.369 [2024-12-06 15:44:57.336567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.369 [2024-12-06 15:44:57.336586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.369 [2024-12-06 15:44:57.341567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.369 [2024-12-06 15:44:57.341622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.369 [2024-12-06 15:44:57.341641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.369 [2024-12-06 15:44:57.346545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.369 [2024-12-06 15:44:57.346603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.369 [2024-12-06 15:44:57.346632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.369 [2024-12-06 
15:44:57.351496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.369 [2024-12-06 15:44:57.351552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.369 [2024-12-06 15:44:57.351571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.369 [2024-12-06 15:44:57.356791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.369 [2024-12-06 15:44:57.356876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.369 [2024-12-06 15:44:57.356895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.369 [2024-12-06 15:44:57.362034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.369 [2024-12-06 15:44:57.362089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.369 [2024-12-06 15:44:57.362109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.628 [2024-12-06 15:44:57.367074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.628 [2024-12-06 15:44:57.367165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.628 [2024-12-06 15:44:57.367184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:51.628 [2024-12-06 15:44:57.371910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.628 [2024-12-06 15:44:57.371983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.628 [2024-12-06 15:44:57.372003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.628 [2024-12-06 15:44:57.376805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.628 [2024-12-06 15:44:57.376863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.628 [2024-12-06 15:44:57.376882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.628 [2024-12-06 15:44:57.382033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.628 [2024-12-06 15:44:57.382087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.628 [2024-12-06 15:44:57.382105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.628 [2024-12-06 15:44:57.387918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.628 [2024-12-06 15:44:57.387985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.628 [2024-12-06 15:44:57.388008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.628 [2024-12-06 15:44:57.392668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.628 [2024-12-06 15:44:57.392740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.628 [2024-12-06 15:44:57.392759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.628 [2024-12-06 15:44:57.397392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.628 [2024-12-06 15:44:57.397455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.628 [2024-12-06 15:44:57.397474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.628 [2024-12-06 15:44:57.401746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.628 [2024-12-06 15:44:57.401803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.628 [2024-12-06 15:44:57.401821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.628 [2024-12-06 15:44:57.406115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.628 [2024-12-06 15:44:57.406190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.628 [2024-12-06 15:44:57.406209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.628 [2024-12-06 15:44:57.410409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.628 [2024-12-06 15:44:57.410466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.628 [2024-12-06 15:44:57.410485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.628 [2024-12-06 15:44:57.414751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.628 [2024-12-06 15:44:57.414815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.628 [2024-12-06 15:44:57.414834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.628 [2024-12-06 15:44:57.419029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.628 [2024-12-06 15:44:57.419081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.628 [2024-12-06 15:44:57.419100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.628 [2024-12-06 15:44:57.423301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.628 [2024-12-06 15:44:57.423405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.628 
[2024-12-06 15:44:57.423424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.628 [2024-12-06 15:44:57.428098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8
00:27:51.628 [2024-12-06 15:44:57.428160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.628 [2024-12-06 15:44:57.428185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... repeated entries of the same pattern (tcp.c:2241:data_crc32_calc_done Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8, followed by the WRITE command and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, differing only in timestamp, lba, cid, and sqhd) elided, spanning 15:44:57.432 through 15:44:57.837 ...]
00:27:51.891 [2024-12-06 15:44:57.841860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8
00:27:51.891 [2024-12-06 15:44:57.842072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.891 [2024-12-06 15:44:57.842093] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.891 [2024-12-06 15:44:57.846808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.891 [2024-12-06 15:44:57.847041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.891 [2024-12-06 15:44:57.847066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.891 [2024-12-06 15:44:57.852076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.891 [2024-12-06 15:44:57.852313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.891 [2024-12-06 15:44:57.852334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.891 [2024-12-06 15:44:57.857591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.891 [2024-12-06 15:44:57.857882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.891 [2024-12-06 15:44:57.857904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.891 [2024-12-06 15:44:57.863834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.891 [2024-12-06 15:44:57.864085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.891 [2024-12-06 15:44:57.864105] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.891 [2024-12-06 15:44:57.869236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.891 [2024-12-06 15:44:57.869456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.891 [2024-12-06 15:44:57.869475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.891 [2024-12-06 15:44:57.874681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.891 [2024-12-06 15:44:57.874936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.891 [2024-12-06 15:44:57.874957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.891 [2024-12-06 15:44:57.880116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:51.891 [2024-12-06 15:44:57.880381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.891 [2024-12-06 15:44:57.880403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.151 [2024-12-06 15:44:57.885354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.151 [2024-12-06 15:44:57.885605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:52.151 [2024-12-06 15:44:57.885625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.151 [2024-12-06 15:44:57.890659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.151 [2024-12-06 15:44:57.890896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.151 [2024-12-06 15:44:57.890917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.151 [2024-12-06 15:44:57.895440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.151 [2024-12-06 15:44:57.895708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.151 [2024-12-06 15:44:57.895729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.151 [2024-12-06 15:44:57.900195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.151 [2024-12-06 15:44:57.900440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.151 [2024-12-06 15:44:57.900460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.151 [2024-12-06 15:44:57.904865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.151 [2024-12-06 15:44:57.905100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.151 [2024-12-06 15:44:57.905122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.151 [2024-12-06 15:44:57.909270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.151 [2024-12-06 15:44:57.909616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.151 [2024-12-06 15:44:57.909637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.151 [2024-12-06 15:44:57.915086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.151 [2024-12-06 15:44:57.915441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.151 [2024-12-06 15:44:57.915461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.151 [2024-12-06 15:44:57.920272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.151 [2024-12-06 15:44:57.920526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.151 [2024-12-06 15:44:57.920546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.151 [2024-12-06 15:44:57.924893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.151 [2024-12-06 15:44:57.925126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.151 [2024-12-06 15:44:57.925146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.151 [2024-12-06 15:44:57.929359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.151 [2024-12-06 15:44:57.929614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.151 [2024-12-06 15:44:57.929634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.151 [2024-12-06 15:44:57.933758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.151 [2024-12-06 15:44:57.933985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.151 [2024-12-06 15:44:57.934005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.151 [2024-12-06 15:44:57.938201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.151 [2024-12-06 15:44:57.938450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:57.938471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:57.942604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 
[2024-12-06 15:44:57.942845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:57.942864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:57.947034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:57.947277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:57.947298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:57.951642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:57.951875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:57.951896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:57.956082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:57.956352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:57.956379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:57.960698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:57.960938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:57.960959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:57.965834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:57.966072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:57.966092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:57.970547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:57.970797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:57.970818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:57.975946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:57.976191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:57.976216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:57.981544] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:57.981793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:57.981813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:57.987203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:57.987457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:57.987478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:57.993944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:57.994209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:57.994231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.000866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.001110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.001131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:52.152 [2024-12-06 15:44:58.007102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.007325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.007346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.013099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.013350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.013376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.019407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.019655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.019675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.025752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.026037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.026057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.032176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.032467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.032488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.038482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.038752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.038772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.044258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.044527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.044547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.050262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.050501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.050523] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.055278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.055525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.055547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.059632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.059882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.059902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.063581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.063815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.063836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.067501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.067751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.067771] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.071421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.071664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.071683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.075356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.075599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.152 [2024-12-06 15:44:58.075620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.152 [2024-12-06 15:44:58.079294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.152 [2024-12-06 15:44:58.079536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.153 [2024-12-06 15:44:58.079556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.153 [2024-12-06 15:44:58.083197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.153 [2024-12-06 15:44:58.083450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:52.153 [2024-12-06 15:44:58.083471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.153 [2024-12-06 15:44:58.087095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.153 [2024-12-06 15:44:58.087331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.153 [2024-12-06 15:44:58.087351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.153 [2024-12-06 15:44:58.091030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.153 [2024-12-06 15:44:58.091277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.153 [2024-12-06 15:44:58.091297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.153 [2024-12-06 15:44:58.094936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.153 [2024-12-06 15:44:58.095176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.153 [2024-12-06 15:44:58.095197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.153 [2024-12-06 15:44:58.099095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.153 [2024-12-06 15:44:58.099337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.153 [2024-12-06 15:44:58.099357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.153 [2024-12-06 15:44:58.103173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.153 [2024-12-06 15:44:58.103431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.153 [2024-12-06 15:44:58.103452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.153 [2024-12-06 15:44:58.107066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.153 [2024-12-06 15:44:58.107302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.153 [2024-12-06 15:44:58.107326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.153 [2024-12-06 15:44:58.110973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.153 [2024-12-06 15:44:58.111218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.153 [2024-12-06 15:44:58.111239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.153 [2024-12-06 15:44:58.114880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.153 [2024-12-06 15:44:58.115123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.153 [2024-12-06 15:44:58.115144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:52.153 [2024-12-06 15:44:58.118797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8
00:27:52.153 [2024-12-06 15:44:58.119042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.153 [2024-12-06 15:44:58.119062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-record cycle — tcp.c:2241:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8, followed by a WRITE *NOTICE* (len:32, lba varies) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (sqhd cycling 0002/0022/0042/0062) — repeats for roughly 50 further writes, 15:44:58.123 through 15:44:58.179 ...]
00:27:52.414 6262.00 IOPS, 782.75 MiB/s [2024-12-06T14:44:58.412Z]
[... the cycle continues for roughly 65 further writes on tqpair=(0x57a270), 15:44:58.184 through 15:44:58.517 ...]
00:27:52.686 [2024-12-06 15:44:58.517742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8
00:27:52.686 [2024-12-06 15:44:58.517961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.686 [2024-12-06 15:44:58.517981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:52.686 [2024-12-06 15:44:58.522048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with
pdu=0x200016eff3c8 00:27:52.686 [2024-12-06 15:44:58.522267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.686 [2024-12-06 15:44:58.522285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.687 [2024-12-06 15:44:58.526392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.687 [2024-12-06 15:44:58.526612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.687 [2024-12-06 15:44:58.526632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.687 [2024-12-06 15:44:58.530674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.687 [2024-12-06 15:44:58.530891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.687 [2024-12-06 15:44:58.530911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.687 [2024-12-06 15:44:58.534930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.687 [2024-12-06 15:44:58.535144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.687 [2024-12-06 15:44:58.535168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.687 [2024-12-06 15:44:58.539352] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.687 [2024-12-06 15:44:58.539589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.687 [2024-12-06 15:44:58.539609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.687 [2024-12-06 15:44:58.543739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.687 [2024-12-06 15:44:58.543989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.687 [2024-12-06 15:44:58.544009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.687 [2024-12-06 15:44:58.548167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.687 [2024-12-06 15:44:58.548396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.688 [2024-12-06 15:44:58.548415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.688 [2024-12-06 15:44:58.552415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.688 [2024-12-06 15:44:58.552637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.688 [2024-12-06 15:44:58.552658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.688 [2024-12-06 
15:44:58.556739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.688 [2024-12-06 15:44:58.556957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.688 [2024-12-06 15:44:58.556978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.688 [2024-12-06 15:44:58.560979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.688 [2024-12-06 15:44:58.561202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.688 [2024-12-06 15:44:58.561223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.688 [2024-12-06 15:44:58.565139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.688 [2024-12-06 15:44:58.565383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.688 [2024-12-06 15:44:58.565403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.688 [2024-12-06 15:44:58.570213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.688 [2024-12-06 15:44:58.570523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.688 [2024-12-06 15:44:58.570544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:52.688 [2024-12-06 15:44:58.575908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.688 [2024-12-06 15:44:58.576155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.688 [2024-12-06 15:44:58.576176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.688 [2024-12-06 15:44:58.580687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.688 [2024-12-06 15:44:58.580946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.688 [2024-12-06 15:44:58.580967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.688 [2024-12-06 15:44:58.585712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.689 [2024-12-06 15:44:58.585939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.689 [2024-12-06 15:44:58.585959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.689 [2024-12-06 15:44:58.591885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.689 [2024-12-06 15:44:58.592089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.689 [2024-12-06 15:44:58.592108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.689 [2024-12-06 15:44:58.597553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.689 [2024-12-06 15:44:58.597768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.689 [2024-12-06 15:44:58.597789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.689 [2024-12-06 15:44:58.604173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.689 [2024-12-06 15:44:58.604522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.689 [2024-12-06 15:44:58.604543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.689 [2024-12-06 15:44:58.611180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.689 [2024-12-06 15:44:58.611437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.689 [2024-12-06 15:44:58.611459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.689 [2024-12-06 15:44:58.617797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.689 [2024-12-06 15:44:58.618115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.689 [2024-12-06 15:44:58.618136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.689 [2024-12-06 15:44:58.624770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.689 [2024-12-06 15:44:58.625067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.690 [2024-12-06 15:44:58.625088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.690 [2024-12-06 15:44:58.632041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.690 [2024-12-06 15:44:58.632332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.690 [2024-12-06 15:44:58.632353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.690 [2024-12-06 15:44:58.639116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.690 [2024-12-06 15:44:58.639459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.690 [2024-12-06 15:44:58.639479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.690 [2024-12-06 15:44:58.645813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.690 [2024-12-06 15:44:58.646141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.690 
[2024-12-06 15:44:58.646160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.690 [2024-12-06 15:44:58.652587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.690 [2024-12-06 15:44:58.652916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.690 [2024-12-06 15:44:58.652937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.690 [2024-12-06 15:44:58.659755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.690 [2024-12-06 15:44:58.660036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.691 [2024-12-06 15:44:58.660056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.691 [2024-12-06 15:44:58.666194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.691 [2024-12-06 15:44:58.666455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.691 [2024-12-06 15:44:58.666476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.691 [2024-12-06 15:44:58.672244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.691 [2024-12-06 15:44:58.672546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.691 [2024-12-06 15:44:58.672567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.950 [2024-12-06 15:44:58.678781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.950 [2024-12-06 15:44:58.679031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.950 [2024-12-06 15:44:58.679052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.950 [2024-12-06 15:44:58.685420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.950 [2024-12-06 15:44:58.685652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.950 [2024-12-06 15:44:58.685677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.950 [2024-12-06 15:44:58.691823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.950 [2024-12-06 15:44:58.692050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.950 [2024-12-06 15:44:58.692071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.950 [2024-12-06 15:44:58.698587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.950 [2024-12-06 15:44:58.698818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.950 [2024-12-06 15:44:58.698839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.950 [2024-12-06 15:44:58.705270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.950 [2024-12-06 15:44:58.705517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.950 [2024-12-06 15:44:58.705538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.950 [2024-12-06 15:44:58.712388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.950 [2024-12-06 15:44:58.712627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.950 [2024-12-06 15:44:58.712650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.719081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.719318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.719340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.724344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.724587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.724608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.729036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.729268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.729289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.733408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.733641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.733662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.737830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.738077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.738098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.742212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 
[2024-12-06 15:44:58.742445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.742465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.746610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.746855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.746876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.750986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.751218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.751239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.755608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.755840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.755862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.760000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.760234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.760255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.764551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.764783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.764804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.769882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.770129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.770149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.775606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.775847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.775868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.780342] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.780584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.780607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.784913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.784969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.784988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.789929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.790164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.790185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.794197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.794434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.794455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:52.951 [2024-12-06 15:44:58.798670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.798903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.798924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.804143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.804382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.804403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.810054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.810292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.810315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.814812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.815048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.815069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.820034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.820270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.820296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.824983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.825229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.825251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.829830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.830070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.830091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.834519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.834752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.834774] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.839331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.839565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.839587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.843971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.844202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.844224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.848720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.848941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.848962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.853353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.853597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.853618] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.858448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.858693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.858714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.863389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.863628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.863649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.868797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.869028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.951 [2024-12-06 15:44:58.869049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.951 [2024-12-06 15:44:58.874361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.951 [2024-12-06 15:44:58.874603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:52.952 [2024-12-06 15:44:58.874624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.952 [2024-12-06 15:44:58.880091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.952 [2024-12-06 15:44:58.880335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.952 [2024-12-06 15:44:58.880356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.952 [2024-12-06 15:44:58.885652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.952 [2024-12-06 15:44:58.885894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.952 [2024-12-06 15:44:58.885915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.952 [2024-12-06 15:44:58.891158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.952 [2024-12-06 15:44:58.891392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.952 [2024-12-06 15:44:58.891414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.952 [2024-12-06 15:44:58.896598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.952 [2024-12-06 15:44:58.896831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.952 [2024-12-06 15:44:58.896852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.952 [2024-12-06 15:44:58.901868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.952 [2024-12-06 15:44:58.902098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.952 [2024-12-06 15:44:58.902119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.952 [2024-12-06 15:44:58.906755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.952 [2024-12-06 15:44:58.906987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.952 [2024-12-06 15:44:58.907010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.952 [2024-12-06 15:44:58.911029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.952 [2024-12-06 15:44:58.911264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.952 [2024-12-06 15:44:58.911286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.952 [2024-12-06 15:44:58.915239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.952 [2024-12-06 15:44:58.915485] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.952 [2024-12-06 15:44:58.915506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.952 [2024-12-06 15:44:58.919611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.952 [2024-12-06 15:44:58.919852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.952 [2024-12-06 15:44:58.919873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.952 [2024-12-06 15:44:58.923983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.952 [2024-12-06 15:44:58.924220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.952 [2024-12-06 15:44:58.924242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.952 [2024-12-06 15:44:58.928331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.952 [2024-12-06 15:44:58.928591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.952 [2024-12-06 15:44:58.928613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.952 [2024-12-06 15:44:58.932986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.952 [2024-12-06 15:44:58.933224] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.952 [2024-12-06 15:44:58.933246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.952 [2024-12-06 15:44:58.937605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.952 [2024-12-06 15:44:58.937855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.952 [2024-12-06 15:44:58.937876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.952 [2024-12-06 15:44:58.942078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:52.952 [2024-12-06 15:44:58.942325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.952 [2024-12-06 15:44:58.942347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.211 [2024-12-06 15:44:58.946609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.211 [2024-12-06 15:44:58.946870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.211 [2024-12-06 15:44:58.946896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.211 [2024-12-06 15:44:58.951165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with 
pdu=0x200016eff3c8 00:27:53.211 [2024-12-06 15:44:58.951434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.211 [2024-12-06 15:44:58.951457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.211 [2024-12-06 15:44:58.955892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.211 [2024-12-06 15:44:58.956152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.211 [2024-12-06 15:44:58.956175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.211 [2024-12-06 15:44:58.961609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.211 [2024-12-06 15:44:58.961859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.211 [2024-12-06 15:44:58.961882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.211 [2024-12-06 15:44:58.967247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.211 [2024-12-06 15:44:58.967498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.211 [2024-12-06 15:44:58.967521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:58.973235] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:58.973480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:58.973502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:58.978800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:58.978855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:58.978875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:58.983892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:58.984142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:58.984163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:58.988719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:58.988957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:58.988978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:58.993440] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:58.993694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:58.993715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:58.998356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:58.998609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:58.998631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.003299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.003535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.003557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.007929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.008173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.008194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:53.212 [2024-12-06 15:44:59.012397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.012644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.012666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.017053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.017285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.017306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.021683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.021916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.021937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.027313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.027556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.027578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.032780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.033013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.033035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.038038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.038269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.038290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.043345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.043583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.043605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.048127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.048359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.048387] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.052964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.053246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.053268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.057721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.057956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.057979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.062209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.062448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.062470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.066851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.067083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.067104] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.071583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.071816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.071838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.076224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.076460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.076485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.080826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.081059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.081080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.085247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.085500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:53.212 [2024-12-06 15:44:59.085522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.089692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.089923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.089944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.094085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.094319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.094340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.212 [2024-12-06 15:44:59.098703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.212 [2024-12-06 15:44:59.098936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.212 [2024-12-06 15:44:59.098957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.103137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.103381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.103402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.107636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.107869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.107890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.112326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.112563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.112584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.117453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.117687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.117712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.122473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.122615] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.122634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.127780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.128016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.128038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.132485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.132721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.132742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.137163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.137416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.137437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.142228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.142472] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.142493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.147356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.147612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.147633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.153222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.153462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.153484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.158041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.158275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.158296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.163261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 
00:27:53.213 [2024-12-06 15:44:59.163501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.163523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.168897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.169132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.169154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.174183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.174330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.174350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:53.213 [2024-12-06 15:44:59.179582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.179815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.179837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:53.213 6160.00 IOPS, 770.00 MiB/s [2024-12-06T14:44:59.211Z] [2024-12-06 15:44:59.185804] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x57a270) with pdu=0x200016eff3c8 00:27:53.213 [2024-12-06 15:44:59.185861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:53.213 [2024-12-06 15:44:59.185881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.213 00:27:53.213 Latency(us) 00:27:53.213 [2024-12-06T14:44:59.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.213 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:53.213 nvme0n1 : 2.00 6155.43 769.43 0.00 0.00 2594.23 1849.05 12358.22 00:27:53.213 [2024-12-06T14:44:59.211Z] =================================================================================================================== 00:27:53.213 [2024-12-06T14:44:59.211Z] Total : 6155.43 769.43 0.00 0.00 2594.23 1849.05 12358.22 00:27:53.213 { 00:27:53.213 "results": [ 00:27:53.213 { 00:27:53.213 "job": "nvme0n1", 00:27:53.213 "core_mask": "0x2", 00:27:53.213 "workload": "randwrite", 00:27:53.213 "status": "finished", 00:27:53.213 "queue_depth": 16, 00:27:53.213 "io_size": 131072, 00:27:53.213 "runtime": 2.004571, 00:27:53.213 "iops": 6155.431760710895, 00:27:53.213 "mibps": 769.4289700888619, 00:27:53.213 "io_failed": 0, 00:27:53.213 "io_timeout": 0, 00:27:53.213 "avg_latency_us": 2594.2322877133674, 00:27:53.213 "min_latency_us": 1849.0514285714285, 00:27:53.213 "max_latency_us": 12358.217142857144 00:27:53.213 } 00:27:53.213 ], 00:27:53.213 "core_count": 1 00:27:53.213 } 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:53.508 15:44:59 
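The summary figures in the `results` JSON above are internally consistent: with a 128 KiB IO size, MiB/s is just IOPS times the IO size. A quick check using the exact numbers reported:

```python
# Figures copied from the "results" JSON above.
iops = 6155.431760710895
io_size = 131072                        # bytes per IO ("io_size": 131072)

mibps = iops * io_size / (1024 * 1024)  # 128 KiB per IO = iops / 8
print(round(mibps, 2))                  # 769.43, matching the reported "mibps"
```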
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:53.508 | .driver_specific 00:27:53.508 | .nvme_error 00:27:53.508 | .status_code 00:27:53.508 | .command_transient_transport_error' 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 399 > 0 )) 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3159568 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3159568 ']' 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3159568 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3159568 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3159568' 00:27:53.508 killing process with pid 3159568 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3159568 00:27:53.508 Received shutdown signal, test time was about 2.000000 seconds 00:27:53.508 00:27:53.508 
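The `get_transient_errcount` helper traced above pipes `bdev_get_iostat -b nvme0n1` through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` and then asserts the count is positive (`(( 399 > 0 ))`). The same extraction, sketched in Python against a hypothetical, heavily trimmed sample payload — only the keys the jq path actually touches are shown; the real RPC output has many more fields:

```python
import json

# Hypothetical minimal bdev_get_iostat reply; the nesting mirrors the
# jq filter in the trace, and 399 is the count the trace observed.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 399
          }
        }
      }
    }
  ]
}
""")

# Equivalent of: .bdevs[0].driver_specific.nvme_error
#                  .status_code.command_transient_transport_error
count = (sample["bdevs"][0]["driver_specific"]["nvme_error"]
         ["status_code"]["command_transient_transport_error"])
print(count)
```

A positive count here is the test's pass condition: every injected digest corruption must surface as a counted transient transport error.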
Latency(us) 00:27:53.508 [2024-12-06T14:44:59.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.508 [2024-12-06T14:44:59.506Z] =================================================================================================================== 00:27:53.508 [2024-12-06T14:44:59.506Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:53.508 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3159568 00:27:53.813 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3157752 00:27:53.813 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3157752 ']' 00:27:53.813 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3157752 00:27:53.813 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:53.813 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:53.813 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3157752 00:27:53.813 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:53.813 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:53.813 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3157752' 00:27:53.813 killing process with pid 3157752 00:27:53.813 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3157752 00:27:53.813 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3157752 00:27:54.072 00:27:54.072 real 0m13.858s 
00:27:54.072 user 0m26.536s 00:27:54.072 sys 0m4.521s 00:27:54.072 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:54.072 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:54.072 ************************************ 00:27:54.072 END TEST nvmf_digest_error 00:27:54.072 ************************************ 00:27:54.072 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:54.072 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:54.072 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:54.072 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:54.072 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:54.072 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:54.072 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:54.072 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:54.072 rmmod nvme_tcp 00:27:54.072 rmmod nvme_fabrics 00:27:54.073 rmmod nvme_keyring 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3157752 ']' 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3157752 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3157752 ']' 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3157752 
00:27:54.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3157752) - No such process 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3157752 is not found' 00:27:54.073 Process with pid 3157752 is not found 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.073 15:44:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:56.609 00:27:56.609 real 0m36.082s 00:27:56.609 user 0m54.823s 00:27:56.609 sys 0m13.587s 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:56.609 
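The `killprocess 3157752` path above runs `kill -0 $pid` purely as an existence probe; here it fails with "No such process" because the digest-error test already killed that PID, so the helper just logs that the process is gone. The same idiom, sketched in Python (illustrative, not the autotest_common.sh helper):

```python
import os

def process_alive(pid: int) -> bool:
    """Signal 0 delivers nothing but still performs the kernel's
    existence/permission check, mirroring `kill -0 $pid` in the trace."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:   # ESRCH -> the "No such process" case above
        return False
    except PermissionError:      # EPERM: process exists but isn't ours
        return True
    return True

print(process_alive(os.getpid()))  # the current process is always alive
```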
************************************ 00:27:56.609 END TEST nvmf_digest 00:27:56.609 ************************************ 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.609 ************************************ 00:27:56.609 START TEST nvmf_bdevperf 00:27:56.609 ************************************ 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:56.609 * Looking for test storage... 
00:27:56.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:56.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.609 --rc genhtml_branch_coverage=1 00:27:56.609 --rc genhtml_function_coverage=1 00:27:56.609 --rc genhtml_legend=1 00:27:56.609 --rc geninfo_all_blocks=1 00:27:56.609 --rc geninfo_unexecuted_blocks=1 00:27:56.609 00:27:56.609 ' 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:27:56.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.609 --rc genhtml_branch_coverage=1 00:27:56.609 --rc genhtml_function_coverage=1 00:27:56.609 --rc genhtml_legend=1 00:27:56.609 --rc geninfo_all_blocks=1 00:27:56.609 --rc geninfo_unexecuted_blocks=1 00:27:56.609 00:27:56.609 ' 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:56.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.609 --rc genhtml_branch_coverage=1 00:27:56.609 --rc genhtml_function_coverage=1 00:27:56.609 --rc genhtml_legend=1 00:27:56.609 --rc geninfo_all_blocks=1 00:27:56.609 --rc geninfo_unexecuted_blocks=1 00:27:56.609 00:27:56.609 ' 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:56.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.609 --rc genhtml_branch_coverage=1 00:27:56.609 --rc genhtml_function_coverage=1 00:27:56.609 --rc genhtml_legend=1 00:27:56.609 --rc geninfo_all_blocks=1 00:27:56.609 --rc geninfo_unexecuted_blocks=1 00:27:56.609 00:27:56.609 ' 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
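The `lt 1.15 2` call traced above (deciding which lcov option format to use) goes through `cmp_versions`, which splits each version on `.`/`-`/`:` into arrays and compares them component by component, padding the shorter one. A rough Python sketch of that comparison — an approximation of the shell logic, not a port of scripts/common.sh:

```python
def version_tuple(v: str):
    # scripts/common.sh reads the version with IFS=.-: ; treat any
    # non-numeric field as 0, as a simplifying assumption.
    fields = v.replace("-", ".").replace(":", ".").split(".")
    return tuple(int(f) if f.isdigit() else 0 for f in fields)

def lt(a: str, b: str) -> bool:
    ta, tb = version_tuple(a), version_tuple(b)
    width = max(len(ta), len(tb))
    # pad with zeros so "1.15" vs "2" compares as (1, 15) vs (2, 0)
    ta += (0,) * (width - len(ta))
    tb += (0,) * (width - len(tb))
    return ta < tb

print(lt("1.15", "2"))  # True: lcov 1.x predates the 2.x option format
```

Note the padding matters: a naive string compare would rank "1.15" above "1.2", while component-wise numeric compare gets it right.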
NVMF_IP_LEAST_ADDR=8 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.609 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:56.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
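The enormous PATH echoed above is the result of paths/export.sh prepending the same Go/protoc/golangci directories each time it is sourced, once per nested test script. A small order-preserving dedup sketch (illustrative only; the test scripts themselves don't dedupe):

```python
def dedupe_path(path: str) -> str:
    """Drop repeated PATH entries while keeping first-seen order, so
    prepended toolchain dirs still shadow the system defaults."""
    seen = set()
    kept = []
    for entry in path.split(":"):
        if entry and entry not in seen:
            seen.add(entry)
            kept.append(entry)
    return ":".join(kept)

print(dedupe_path("/opt/go/bin:/usr/bin:/opt/go/bin:/usr/local/bin:/usr/bin"))
# /opt/go/bin:/usr/bin:/usr/local/bin
```

Lookup semantics are unchanged because the shell always takes the first match along PATH; only the later duplicates are dead weight.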
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:56.610 15:45:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.179 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:03.180 15:45:07 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:03.180 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:03.180 
15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:03.180 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:03.180 Found net devices under 0000:86:00.0: cvl_0_0 00:28:03.180 15:45:07 
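The device discovery above matches both ports of one NIC, `0000:86:00.0` and `0000:86:00.1`, as `(0x8086 - 0x159b)` bound to the `ice` driver, i.e. Intel E810 parts from the `e810` ID list built earlier in the trace. A condensed, hypothetical sketch of that vendor/device classification (the real `gather_supported_nvmf_pci_devs` carries many more Intel and Mellanox IDs than shown here):

```python
# Subset of the IDs visible in the trace: Intel vendor 0x8086 with the
# two e810 device IDs, Mellanox vendor 0x15b3 for the mlx list.
E810_DEVICE_IDS = {0x1592, 0x159B}

def classify(vendor: int, device: int) -> str:
    if vendor == 0x8086 and device in E810_DEVICE_IDS:
        return "e810"      # bound by the `ice` driver, as the log shows
    if vendor == 0x15B3:
        return "mlx"
    return "unknown"

print(classify(0x8086, 0x159B))  # e810 -> "Found 0000:86:00.0 (0x8086 - 0x159b)"
```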
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:03.180 Found net devices under 0000:86:00.1: cvl_0_1 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.180 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:03.181 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:03.181 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:03.181 15:45:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:03.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:28:03.181 00:28:03.181 --- 10.0.0.2 ping statistics --- 00:28:03.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.181 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:03.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:28:03.181 00:28:03.181 --- 10.0.0.1 ping statistics --- 00:28:03.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.181 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
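The `nvmf_tcp_init` sequence above splits the two ports across network namespaces so initiator and target traffic actually crosses the wire: the target port is moved into its own netns, each side gets a 10.0.0.x/24 address, the NVMe/TCP port 4420 is opened in iptables, and a ping in each direction verifies the path. A condensed sketch of that sequence — interface and namespace names are copied from the log, while the `run` indirection and `NVMF_DRY_RUN` knob are our additions so the commands can be previewed without root:

```shell
#!/usr/bin/env bash
# Reproduce the namespace plumbing nvmf/common.sh performs in nvmf_tcp_init.
# With NVMF_DRY_RUN=1 the commands are only printed, so no root is needed.
run() { if [[ ${NVMF_DRY_RUN:-0} == 1 ]]; then echo "$*"; else "$@"; fi; }

nvmf_tcp_init_sketch() {
    local tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run ip -4 addr flush "$tgt_if"
    run ip -4 addr flush "$ini_if"
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"               # target port lives in the namespace
    run ip addr add 10.0.0.1/24 dev "$ini_if"           # initiator side
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target side
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                              # initiator -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1          # target -> initiator
}
```

Isolating the target port in a namespace is what lets one machine act as both host and target over real NIC hardware instead of loopback.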
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3163706 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3163706 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3163706 ']' 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.181 [2024-12-06 15:45:08.281965] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:28:03.181 [2024-12-06 15:45:08.282010] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.181 [2024-12-06 15:45:08.359358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:03.181 [2024-12-06 15:45:08.403714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.181 [2024-12-06 15:45:08.403750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.181 [2024-12-06 15:45:08.403758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.181 [2024-12-06 15:45:08.403765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.181 [2024-12-06 15:45:08.403771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:03.181 [2024-12-06 15:45:08.405164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.181 [2024-12-06 15:45:08.405275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.181 [2024-12-06 15:45:08.405275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.181 [2024-12-06 15:45:08.550097] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.181 Malloc0 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.181 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.182 [2024-12-06 15:45:08.613112] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.182 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.182 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:03.182 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:03.182 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:03.182 
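With the network in place, `tgt_init` configures the target over its RPC socket: create the TCP transport, a 64 MiB malloc bdev, a subsystem, attach the bdev as a namespace, and listen on 10.0.0.2:4420. The same five steps the `rpc_cmd` calls above perform can be issued directly with `scripts/rpc.py` against a running `nvmf_tgt` — the path to `rpc.py` below is an assumption about your SPDK checkout (this is a setup fragment, not a runnable standalone script):

```shell
# Equivalent of the rpc_cmd sequence in host/bdevperf.sh, as direct
# scripts/rpc.py calls against an already-running nvmf_tgt.
rpc=./scripts/rpc.py   # adjust to your SPDK tree

$rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Because the target runs inside `cvl_0_0_ns_spdk`, the harness prefixes the app (not the RPC client) with `ip netns exec`; the RPC socket itself lives on the default namespace's filesystem.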
15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:03.182 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:03.182 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:03.182 { 00:28:03.182 "params": { 00:28:03.182 "name": "Nvme$subsystem", 00:28:03.182 "trtype": "$TEST_TRANSPORT", 00:28:03.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.182 "adrfam": "ipv4", 00:28:03.182 "trsvcid": "$NVMF_PORT", 00:28:03.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.182 "hdgst": ${hdgst:-false}, 00:28:03.182 "ddgst": ${ddgst:-false} 00:28:03.182 }, 00:28:03.182 "method": "bdev_nvme_attach_controller" 00:28:03.182 } 00:28:03.182 EOF 00:28:03.182 )") 00:28:03.182 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:03.182 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:03.182 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:03.182 15:45:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:03.182 "params": { 00:28:03.182 "name": "Nvme1", 00:28:03.182 "trtype": "tcp", 00:28:03.182 "traddr": "10.0.0.2", 00:28:03.182 "adrfam": "ipv4", 00:28:03.182 "trsvcid": "4420", 00:28:03.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:03.182 "hdgst": false, 00:28:03.182 "ddgst": false 00:28:03.182 }, 00:28:03.182 "method": "bdev_nvme_attach_controller" 00:28:03.182 }' 00:28:03.182 [2024-12-06 15:45:08.666337] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
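The `gen_nvmf_target_json` expansion above builds one heredoc per subsystem, substitutes the environment (`$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT`, ...), and normalizes the result through `jq` before handing it to bdevperf on `/dev/fd/62`. A trimmed re-implementation of the single-subsystem case — the function name is ours, and `jq` is left out here so the sketch has no dependencies (the real helper does pipe through `jq .`):

```shell
#!/usr/bin/env bash
# Build the bdev_nvme_attach_controller JSON that bdevperf consumes,
# following gen_nvmf_target_json's heredoc-substitution pattern.
gen_target_json_sketch() {
    local traddr=${1:-10.0.0.2} trsvcid=${2:-4420}
    cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```

Feeding this JSON to bdevperf via process substitution (`--json <(gen_target_json_sketch)`) attaches the controller without writing a config file to disk, which is exactly why the log shows `/dev/fd/62` as the config path.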
00:28:03.182 [2024-12-06 15:45:08.666386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3163866 ] 00:28:03.182 [2024-12-06 15:45:08.743154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.182 [2024-12-06 15:45:08.784040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.182 Running I/O for 1 seconds... 00:28:04.118 11371.00 IOPS, 44.42 MiB/s 00:28:04.118 Latency(us) 00:28:04.118 [2024-12-06T14:45:10.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.118 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:04.118 Verification LBA range: start 0x0 length 0x4000 00:28:04.118 Nvme1n1 : 1.00 11453.23 44.74 0.00 0.00 11131.44 1490.16 12233.39 00:28:04.118 [2024-12-06T14:45:10.116Z] =================================================================================================================== 00:28:04.118 [2024-12-06T14:45:10.116Z] Total : 11453.23 44.74 0.00 0.00 11131.44 1490.16 12233.39 00:28:04.377 15:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3164340 00:28:04.377 15:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:04.377 15:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:04.377 15:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:04.377 15:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:04.377 15:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:04.377 15:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:28:04.377 15:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:04.377 { 00:28:04.377 "params": { 00:28:04.377 "name": "Nvme$subsystem", 00:28:04.377 "trtype": "$TEST_TRANSPORT", 00:28:04.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.377 "adrfam": "ipv4", 00:28:04.377 "trsvcid": "$NVMF_PORT", 00:28:04.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.377 "hdgst": ${hdgst:-false}, 00:28:04.377 "ddgst": ${ddgst:-false} 00:28:04.377 }, 00:28:04.377 "method": "bdev_nvme_attach_controller" 00:28:04.377 } 00:28:04.377 EOF 00:28:04.377 )") 00:28:04.377 15:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:04.377 15:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:04.377 15:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:04.377 15:45:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:04.377 "params": { 00:28:04.377 "name": "Nvme1", 00:28:04.377 "trtype": "tcp", 00:28:04.377 "traddr": "10.0.0.2", 00:28:04.377 "adrfam": "ipv4", 00:28:04.377 "trsvcid": "4420", 00:28:04.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:04.377 "hdgst": false, 00:28:04.377 "ddgst": false 00:28:04.377 }, 00:28:04.377 "method": "bdev_nvme_attach_controller" 00:28:04.377 }' 00:28:04.377 [2024-12-06 15:45:10.200770] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:28:04.377 [2024-12-06 15:45:10.200820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3164340 ] 00:28:04.377 [2024-12-06 15:45:10.278469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.377 [2024-12-06 15:45:10.316885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.636 Running I/O for 15 seconds... 00:28:06.970 11369.00 IOPS, 44.41 MiB/s [2024-12-06T14:45:13.229Z] 11439.00 IOPS, 44.68 MiB/s [2024-12-06T14:45:13.229Z] 15:45:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3163706 00:28:07.231 15:45:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:07.231 [2024-12-06 15:45:13.175387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.231 [2024-12-06 15:45:13.175433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175486] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 
15:45:13.175692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.231 [2024-12-06 15:45:13.175739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.231 [2024-12-06 15:45:13.175746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.175755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.175763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.175773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.175781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.175791] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.175801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.175810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.175819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.175832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.175840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.175850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.175858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.175867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.175874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.175883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.175893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.175901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.175908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.175918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.175925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.175933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.175940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.175953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.175965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.175978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.175986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.175997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 
15:45:13.176007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176124] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 
15:45:13.176313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:59 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.232 [2024-12-06 15:45:13.176430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.232 [2024-12-06 15:45:13.176437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:07.233 [2024-12-06 15:45:13.176490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 
[2024-12-06 15:45:13.176747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:104384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.233 [2024-12-06 15:45:13.176975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.233 [2024-12-06 15:45:13.176984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.176990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.176998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 
[2024-12-06 15:45:13.177014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 
[2024-12-06 15:45:13.177270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.234 [2024-12-06 15:45:13.177292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.234 [2024-12-06 15:45:13.177308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.234 [2024-12-06 15:45:13.177323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.234 [2024-12-06 15:45:13.177339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.234 [2024-12-06 15:45:13.177355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.234 [2024-12-06 15:45:13.177375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.234 [2024-12-06 15:45:13.177391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.234 [2024-12-06 15:45:13.177407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.234 [2024-12-06 15:45:13.177421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.234 [2024-12-06 15:45:13.177437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 
lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.234 [2024-12-06 15:45:13.177453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.234 [2024-12-06 15:45:13.177469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.234 [2024-12-06 15:45:13.177483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.234 [2024-12-06 15:45:13.177499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.234 [2024-12-06 15:45:13.177514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.177522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb81410 is same with the state(6) to be set 00:28:07.234 [2024-12-06 15:45:13.177531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:07.234 [2024-12-06 15:45:13.177537] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:07.234 [2024-12-06 15:45:13.177543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103736 len:8 PRP1 0x0 PRP2 0x0 00:28:07.234 [2024-12-06 15:45:13.177551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:07.234 [2024-12-06 15:45:13.180354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.234 [2024-12-06 15:45:13.180414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.234 [2024-12-06 15:45:13.181017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.234 [2024-12-06 15:45:13.181034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.234 [2024-12-06 15:45:13.181042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.234 [2024-12-06 15:45:13.181211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.234 [2024-12-06 15:45:13.181389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.235 [2024-12-06 15:45:13.181399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.235 [2024-12-06 15:45:13.181408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.235 [2024-12-06 15:45:13.181416] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.235 [2024-12-06 15:45:13.193489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.235 [2024-12-06 15:45:13.193912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.235 [2024-12-06 15:45:13.193930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.235 [2024-12-06 15:45:13.193939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.235 [2024-12-06 15:45:13.194099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.235 [2024-12-06 15:45:13.194262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.235 [2024-12-06 15:45:13.194271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.235 [2024-12-06 15:45:13.194278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.235 [2024-12-06 15:45:13.194286] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.235 [2024-12-06 15:45:13.206279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.235 [2024-12-06 15:45:13.206700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.235 [2024-12-06 15:45:13.206747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.235 [2024-12-06 15:45:13.206772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.235 [2024-12-06 15:45:13.207154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.235 [2024-12-06 15:45:13.207315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.235 [2024-12-06 15:45:13.207324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.235 [2024-12-06 15:45:13.207331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.235 [2024-12-06 15:45:13.207337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.235 [2024-12-06 15:45:13.219113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.235 [2024-12-06 15:45:13.219511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.235 [2024-12-06 15:45:13.219529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.235 [2024-12-06 15:45:13.219537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.235 [2024-12-06 15:45:13.219696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.235 [2024-12-06 15:45:13.219857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.235 [2024-12-06 15:45:13.219867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.235 [2024-12-06 15:45:13.219873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.235 [2024-12-06 15:45:13.219880] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.494 [2024-12-06 15:45:13.232027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.494 [2024-12-06 15:45:13.232375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.494 [2024-12-06 15:45:13.232393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.494 [2024-12-06 15:45:13.232402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.494 [2024-12-06 15:45:13.232579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.494 [2024-12-06 15:45:13.232753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.494 [2024-12-06 15:45:13.232763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.494 [2024-12-06 15:45:13.232769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.494 [2024-12-06 15:45:13.232776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.494 [2024-12-06 15:45:13.244894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.494 [2024-12-06 15:45:13.245200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.494 [2024-12-06 15:45:13.245216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.494 [2024-12-06 15:45:13.245224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.494 [2024-12-06 15:45:13.245389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.494 [2024-12-06 15:45:13.245549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.494 [2024-12-06 15:45:13.245559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.494 [2024-12-06 15:45:13.245565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.494 [2024-12-06 15:45:13.245571] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.494 [2024-12-06 15:45:13.257667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.494 [2024-12-06 15:45:13.258508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.494 [2024-12-06 15:45:13.258531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.494 [2024-12-06 15:45:13.258540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.494 [2024-12-06 15:45:13.258708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.494 [2024-12-06 15:45:13.258869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.494 [2024-12-06 15:45:13.258878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.494 [2024-12-06 15:45:13.258885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.494 [2024-12-06 15:45:13.258892] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.494 [2024-12-06 15:45:13.270439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.494 [2024-12-06 15:45:13.270815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.494 [2024-12-06 15:45:13.270833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.494 [2024-12-06 15:45:13.270841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.494 [2024-12-06 15:45:13.271001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.494 [2024-12-06 15:45:13.271162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.494 [2024-12-06 15:45:13.271174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.494 [2024-12-06 15:45:13.271181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.494 [2024-12-06 15:45:13.271187] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.494 [2024-12-06 15:45:13.283426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.494 [2024-12-06 15:45:13.283792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.494 [2024-12-06 15:45:13.283811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.494 [2024-12-06 15:45:13.283819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.494 [2024-12-06 15:45:13.283993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.494 [2024-12-06 15:45:13.284166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.494 [2024-12-06 15:45:13.284176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.494 [2024-12-06 15:45:13.284183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.494 [2024-12-06 15:45:13.284191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.494 [2024-12-06 15:45:13.296489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.494 [2024-12-06 15:45:13.296750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.494 [2024-12-06 15:45:13.296769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.494 [2024-12-06 15:45:13.296777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.494 [2024-12-06 15:45:13.296951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.494 [2024-12-06 15:45:13.297125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.494 [2024-12-06 15:45:13.297135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.494 [2024-12-06 15:45:13.297142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.494 [2024-12-06 15:45:13.297150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.494 [2024-12-06 15:45:13.309560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.494 [2024-12-06 15:45:13.309849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.494 [2024-12-06 15:45:13.309867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.494 [2024-12-06 15:45:13.309875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.494 [2024-12-06 15:45:13.310048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.494 [2024-12-06 15:45:13.310222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-06 15:45:13.310232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-06 15:45:13.310239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-06 15:45:13.310250] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.495 [2024-12-06 15:45:13.322654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-06 15:45:13.323066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-06 15:45:13.323110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-06 15:45:13.323134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.495 [2024-12-06 15:45:13.323732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.495 [2024-12-06 15:45:13.324299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-06 15:45:13.324309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-06 15:45:13.324316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-06 15:45:13.324323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.495 [2024-12-06 15:45:13.335597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-06 15:45:13.335975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-06 15:45:13.336031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-06 15:45:13.336056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.495 [2024-12-06 15:45:13.336652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.495 [2024-12-06 15:45:13.337242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-06 15:45:13.337270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-06 15:45:13.337277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-06 15:45:13.337284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.495 [2024-12-06 15:45:13.350888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-06 15:45:13.351263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-06 15:45:13.351286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-06 15:45:13.351297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.495 [2024-12-06 15:45:13.351559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.495 [2024-12-06 15:45:13.351816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-06 15:45:13.351830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-06 15:45:13.351840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-06 15:45:13.351849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.495 [2024-12-06 15:45:13.363892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-06 15:45:13.364171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-06 15:45:13.364193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-06 15:45:13.364201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.495 [2024-12-06 15:45:13.364376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.495 [2024-12-06 15:45:13.364546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-06 15:45:13.364555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-06 15:45:13.364561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-06 15:45:13.364568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.495 [2024-12-06 15:45:13.376743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-06 15:45:13.377009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-06 15:45:13.377027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-06 15:45:13.377034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.495 [2024-12-06 15:45:13.377193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.495 [2024-12-06 15:45:13.377352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-06 15:45:13.377362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-06 15:45:13.377374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-06 15:45:13.377381] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.495 [2024-12-06 15:45:13.389510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-06 15:45:13.389791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-06 15:45:13.389835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-06 15:45:13.389859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.495 [2024-12-06 15:45:13.390388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.495 [2024-12-06 15:45:13.390550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-06 15:45:13.390559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-06 15:45:13.390566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-06 15:45:13.390572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.495 [2024-12-06 15:45:13.402306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-06 15:45:13.402576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-06 15:45:13.402594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-06 15:45:13.402602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.495 [2024-12-06 15:45:13.402766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.495 [2024-12-06 15:45:13.402926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-06 15:45:13.402936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-06 15:45:13.402942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-06 15:45:13.402949] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.495 [2024-12-06 15:45:13.415100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-06 15:45:13.415465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-06 15:45:13.415483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-06 15:45:13.415492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.495 [2024-12-06 15:45:13.415652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.495 [2024-12-06 15:45:13.415812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-06 15:45:13.415822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-06 15:45:13.415828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-06 15:45:13.415834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.495 [2024-12-06 15:45:13.428012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-06 15:45:13.428345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-06 15:45:13.428363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-06 15:45:13.428378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.495 [2024-12-06 15:45:13.428546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.495 [2024-12-06 15:45:13.428715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.495 [2024-12-06 15:45:13.428725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.495 [2024-12-06 15:45:13.428731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.495 [2024-12-06 15:45:13.428738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.495 [2024-12-06 15:45:13.441055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.495 [2024-12-06 15:45:13.441400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.495 [2024-12-06 15:45:13.441418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.495 [2024-12-06 15:45:13.441427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.495 [2024-12-06 15:45:13.441600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.496 [2024-12-06 15:45:13.441774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.496 [2024-12-06 15:45:13.441787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.496 [2024-12-06 15:45:13.441794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.496 [2024-12-06 15:45:13.441801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.496 [2024-12-06 15:45:13.454030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.496 [2024-12-06 15:45:13.454413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.496 [2024-12-06 15:45:13.454431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.496 [2024-12-06 15:45:13.454439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.496 [2024-12-06 15:45:13.454613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.496 [2024-12-06 15:45:13.454787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.496 [2024-12-06 15:45:13.454797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.496 [2024-12-06 15:45:13.454804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.496 [2024-12-06 15:45:13.454811] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.496 [2024-12-06 15:45:13.467017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.496 [2024-12-06 15:45:13.467377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.496 [2024-12-06 15:45:13.467395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.496 [2024-12-06 15:45:13.467404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.496 [2024-12-06 15:45:13.467578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.496 [2024-12-06 15:45:13.467757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.496 [2024-12-06 15:45:13.467767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.496 [2024-12-06 15:45:13.467774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.496 [2024-12-06 15:45:13.467780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.496 [2024-12-06 15:45:13.479970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.496 [2024-12-06 15:45:13.480261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.496 [2024-12-06 15:45:13.480279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.496 [2024-12-06 15:45:13.480287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.496 [2024-12-06 15:45:13.480460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.496 [2024-12-06 15:45:13.480630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.496 [2024-12-06 15:45:13.480639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.496 [2024-12-06 15:45:13.480646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.496 [2024-12-06 15:45:13.480655] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.755 [2024-12-06 15:45:13.492874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.755 [2024-12-06 15:45:13.493157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.755 [2024-12-06 15:45:13.493175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.755 [2024-12-06 15:45:13.493184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.755 [2024-12-06 15:45:13.493356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.755 [2024-12-06 15:45:13.493535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.755 [2024-12-06 15:45:13.493545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.755 [2024-12-06 15:45:13.493552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.755 [2024-12-06 15:45:13.493558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.755 [2024-12-06 15:45:13.505686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.755 [2024-12-06 15:45:13.505962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.755 [2024-12-06 15:45:13.505978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.755 [2024-12-06 15:45:13.505986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.755 [2024-12-06 15:45:13.506145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.755 [2024-12-06 15:45:13.506305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.755 [2024-12-06 15:45:13.506314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.755 [2024-12-06 15:45:13.506320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.755 [2024-12-06 15:45:13.506327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.755 [2024-12-06 15:45:13.518477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.755 [2024-12-06 15:45:13.518785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.755 [2024-12-06 15:45:13.518802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.756 [2024-12-06 15:45:13.518810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.756 [2024-12-06 15:45:13.518970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.756 [2024-12-06 15:45:13.519130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.756 [2024-12-06 15:45:13.519138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.756 [2024-12-06 15:45:13.519145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.756 [2024-12-06 15:45:13.519152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.756 [2024-12-06 15:45:13.531383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.756 [2024-12-06 15:45:13.531661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.756 [2024-12-06 15:45:13.531681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.756 [2024-12-06 15:45:13.531689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.756 [2024-12-06 15:45:13.531848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.756 [2024-12-06 15:45:13.532008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.756 [2024-12-06 15:45:13.532017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.756 [2024-12-06 15:45:13.532023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.756 [2024-12-06 15:45:13.532030] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.756 [2024-12-06 15:45:13.544171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.756 [2024-12-06 15:45:13.544494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.756 [2024-12-06 15:45:13.544512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.756 [2024-12-06 15:45:13.544520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.756 [2024-12-06 15:45:13.544680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.756 [2024-12-06 15:45:13.544840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.756 [2024-12-06 15:45:13.544849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.756 [2024-12-06 15:45:13.544855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.756 [2024-12-06 15:45:13.544862] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.756 [2024-12-06 15:45:13.556980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.756 [2024-12-06 15:45:13.557250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.756 [2024-12-06 15:45:13.557267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.756 [2024-12-06 15:45:13.557274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.756 [2024-12-06 15:45:13.557439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.756 [2024-12-06 15:45:13.557599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.756 [2024-12-06 15:45:13.557609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.756 [2024-12-06 15:45:13.557615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.756 [2024-12-06 15:45:13.557621] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.756 [2024-12-06 15:45:13.569751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.756 [2024-12-06 15:45:13.570040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.756 [2024-12-06 15:45:13.570083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.756 [2024-12-06 15:45:13.570107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.756 [2024-12-06 15:45:13.570670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.756 [2024-12-06 15:45:13.570833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.756 [2024-12-06 15:45:13.570842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.756 [2024-12-06 15:45:13.570849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.756 [2024-12-06 15:45:13.570855] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.756 [2024-12-06 15:45:13.582543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.756 [2024-12-06 15:45:13.582813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.756 [2024-12-06 15:45:13.582830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.756 [2024-12-06 15:45:13.582837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.756 [2024-12-06 15:45:13.582996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.756 [2024-12-06 15:45:13.583156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.756 [2024-12-06 15:45:13.583166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.756 [2024-12-06 15:45:13.583172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.756 [2024-12-06 15:45:13.583178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.756 [2024-12-06 15:45:13.595317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.756 [2024-12-06 15:45:13.595664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.756 [2024-12-06 15:45:13.595711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.756 [2024-12-06 15:45:13.595735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.756 [2024-12-06 15:45:13.596319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.756 [2024-12-06 15:45:13.596927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.756 [2024-12-06 15:45:13.596937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.756 [2024-12-06 15:45:13.596943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.756 [2024-12-06 15:45:13.596950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.756 9778.67 IOPS, 38.20 MiB/s [2024-12-06T14:45:13.754Z] [2024-12-06 15:45:13.609346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.756 [2024-12-06 15:45:13.609707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.756 [2024-12-06 15:45:13.609753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.756 [2024-12-06 15:45:13.609778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.756 [2024-12-06 15:45:13.610249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.756 [2024-12-06 15:45:13.610415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.756 [2024-12-06 15:45:13.610427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.756 [2024-12-06 15:45:13.610434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.756 [2024-12-06 15:45:13.610440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.756 [2024-12-06 15:45:13.622189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.756 [2024-12-06 15:45:13.622572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.756 [2024-12-06 15:45:13.622590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.756 [2024-12-06 15:45:13.622598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.756 [2024-12-06 15:45:13.622766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.756 [2024-12-06 15:45:13.622934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.756 [2024-12-06 15:45:13.622943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.756 [2024-12-06 15:45:13.622950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.756 [2024-12-06 15:45:13.622957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.756 [2024-12-06 15:45:13.634922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.756 [2024-12-06 15:45:13.635240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.756 [2024-12-06 15:45:13.635257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.756 [2024-12-06 15:45:13.635264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.756 [2024-12-06 15:45:13.635429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.756 [2024-12-06 15:45:13.635590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.756 [2024-12-06 15:45:13.635599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.756 [2024-12-06 15:45:13.635605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.756 [2024-12-06 15:45:13.635611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.756 [2024-12-06 15:45:13.647742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.756 [2024-12-06 15:45:13.647999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-12-06 15:45:13.648016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.757 [2024-12-06 15:45:13.648024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.757 [2024-12-06 15:45:13.648184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.757 [2024-12-06 15:45:13.648343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.757 [2024-12-06 15:45:13.648352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.757 [2024-12-06 15:45:13.648358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.757 [2024-12-06 15:45:13.648366] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.757 [2024-12-06 15:45:13.660507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.757 [2024-12-06 15:45:13.660879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-12-06 15:45:13.660923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.757 [2024-12-06 15:45:13.660947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.757 [2024-12-06 15:45:13.661544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.757 [2024-12-06 15:45:13.661964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.757 [2024-12-06 15:45:13.661982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.757 [2024-12-06 15:45:13.661996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.757 [2024-12-06 15:45:13.662010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.757 [2024-12-06 15:45:13.675394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.757 [2024-12-06 15:45:13.675896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-12-06 15:45:13.675942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.757 [2024-12-06 15:45:13.675965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.757 [2024-12-06 15:45:13.676418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.757 [2024-12-06 15:45:13.676677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.757 [2024-12-06 15:45:13.676690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.757 [2024-12-06 15:45:13.676699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.757 [2024-12-06 15:45:13.676709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.757 [2024-12-06 15:45:13.688295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.757 [2024-12-06 15:45:13.688676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-12-06 15:45:13.688694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.757 [2024-12-06 15:45:13.688702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.757 [2024-12-06 15:45:13.688870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.757 [2024-12-06 15:45:13.689039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.757 [2024-12-06 15:45:13.689048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.757 [2024-12-06 15:45:13.689055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.757 [2024-12-06 15:45:13.689062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.757 [2024-12-06 15:45:13.701326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.757 [2024-12-06 15:45:13.701731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-12-06 15:45:13.701752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.757 [2024-12-06 15:45:13.701760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.757 [2024-12-06 15:45:13.701933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.757 [2024-12-06 15:45:13.702106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.757 [2024-12-06 15:45:13.702116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.757 [2024-12-06 15:45:13.702123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.757 [2024-12-06 15:45:13.702130] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.757 [2024-12-06 15:45:13.714352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.757 [2024-12-06 15:45:13.714735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-12-06 15:45:13.714752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.757 [2024-12-06 15:45:13.714760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.757 [2024-12-06 15:45:13.714930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.757 [2024-12-06 15:45:13.715099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.757 [2024-12-06 15:45:13.715108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.757 [2024-12-06 15:45:13.715115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.757 [2024-12-06 15:45:13.715122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.757 [2024-12-06 15:45:13.727108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.757 [2024-12-06 15:45:13.727495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-12-06 15:45:13.727513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.757 [2024-12-06 15:45:13.727520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.757 [2024-12-06 15:45:13.727679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.757 [2024-12-06 15:45:13.727839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.757 [2024-12-06 15:45:13.727848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.757 [2024-12-06 15:45:13.727855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.757 [2024-12-06 15:45:13.727861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.757 [2024-12-06 15:45:13.739932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.757 [2024-12-06 15:45:13.740243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.757 [2024-12-06 15:45:13.740260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:07.757 [2024-12-06 15:45:13.740268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:07.757 [2024-12-06 15:45:13.740439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:07.757 [2024-12-06 15:45:13.740601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.757 [2024-12-06 15:45:13.740611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.757 [2024-12-06 15:45:13.740618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.757 [2024-12-06 15:45:13.740625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.016 [2024-12-06 15:45:13.752883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.016 [2024-12-06 15:45:13.753288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.016 [2024-12-06 15:45:13.753304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:08.016 [2024-12-06 15:45:13.753311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:08.016 [2024-12-06 15:45:13.753474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:08.016 [2024-12-06 15:45:13.753635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.016 [2024-12-06 15:45:13.753644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.016 [2024-12-06 15:45:13.753651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.016 [2024-12-06 15:45:13.753657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.016 [2024-12-06 15:45:13.765730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.016 [2024-12-06 15:45:13.766127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.016 [2024-12-06 15:45:13.766145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:08.016 [2024-12-06 15:45:13.766153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:08.016 [2024-12-06 15:45:13.766313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:08.016 [2024-12-06 15:45:13.766477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.016 [2024-12-06 15:45:13.766487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.016 [2024-12-06 15:45:13.766493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.016 [2024-12-06 15:45:13.766499] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.016 [2024-12-06 15:45:13.778655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.016 [2024-12-06 15:45:13.779034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.016 [2024-12-06 15:45:13.779052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:08.016 [2024-12-06 15:45:13.779060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:08.016 [2024-12-06 15:45:13.779228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:08.016 [2024-12-06 15:45:13.779402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.016 [2024-12-06 15:45:13.779412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.016 [2024-12-06 15:45:13.779423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.016 [2024-12-06 15:45:13.779430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.016 [2024-12-06 15:45:13.791441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.016 [2024-12-06 15:45:13.791831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.016 [2024-12-06 15:45:13.791848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.016 [2024-12-06 15:45:13.791855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.016 [2024-12-06 15:45:13.792015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.016 [2024-12-06 15:45:13.792174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.016 [2024-12-06 15:45:13.792184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.016 [2024-12-06 15:45:13.792190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.016 [2024-12-06 15:45:13.792196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.016 [2024-12-06 15:45:13.804187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.017 [2024-12-06 15:45:13.804554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.017 [2024-12-06 15:45:13.804599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.017 [2024-12-06 15:45:13.804624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.017 [2024-12-06 15:45:13.805132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.017 [2024-12-06 15:45:13.805294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.017 [2024-12-06 15:45:13.805303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.017 [2024-12-06 15:45:13.805310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.017 [2024-12-06 15:45:13.805316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.017 [2024-12-06 15:45:13.816993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.017 [2024-12-06 15:45:13.817329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.017 [2024-12-06 15:45:13.817385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.017 [2024-12-06 15:45:13.817411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.017 [2024-12-06 15:45:13.817993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.017 [2024-12-06 15:45:13.818550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.017 [2024-12-06 15:45:13.818559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.017 [2024-12-06 15:45:13.818566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.017 [2024-12-06 15:45:13.818573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.017 [2024-12-06 15:45:13.829779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.017 [2024-12-06 15:45:13.830171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.017 [2024-12-06 15:45:13.830187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.017 [2024-12-06 15:45:13.830195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.017 [2024-12-06 15:45:13.830353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.017 [2024-12-06 15:45:13.830541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.017 [2024-12-06 15:45:13.830551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.017 [2024-12-06 15:45:13.830558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.017 [2024-12-06 15:45:13.830565] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.017 [2024-12-06 15:45:13.842560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.017 [2024-12-06 15:45:13.842952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.017 [2024-12-06 15:45:13.842969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.017 [2024-12-06 15:45:13.842976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.017 [2024-12-06 15:45:13.843135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.017 [2024-12-06 15:45:13.843295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.017 [2024-12-06 15:45:13.843304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.017 [2024-12-06 15:45:13.843310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.017 [2024-12-06 15:45:13.843316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.017 [2024-12-06 15:45:13.855425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.017 [2024-12-06 15:45:13.855828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.017 [2024-12-06 15:45:13.855873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.017 [2024-12-06 15:45:13.855898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.017 [2024-12-06 15:45:13.856406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.017 [2024-12-06 15:45:13.856569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.017 [2024-12-06 15:45:13.856583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.017 [2024-12-06 15:45:13.856590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.017 [2024-12-06 15:45:13.856596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.017 [2024-12-06 15:45:13.868265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.017 [2024-12-06 15:45:13.868661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.017 [2024-12-06 15:45:13.868684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.017 [2024-12-06 15:45:13.868692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.017 [2024-12-06 15:45:13.868853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.017 [2024-12-06 15:45:13.869012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.017 [2024-12-06 15:45:13.869022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.017 [2024-12-06 15:45:13.869028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.017 [2024-12-06 15:45:13.869034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.017 [2024-12-06 15:45:13.881020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.017 [2024-12-06 15:45:13.881405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.017 [2024-12-06 15:45:13.881423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.017 [2024-12-06 15:45:13.881431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.017 [2024-12-06 15:45:13.881590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.017 [2024-12-06 15:45:13.881751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.017 [2024-12-06 15:45:13.881760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.017 [2024-12-06 15:45:13.881767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.017 [2024-12-06 15:45:13.881773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.017 [2024-12-06 15:45:13.893893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.017 [2024-12-06 15:45:13.894209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.017 [2024-12-06 15:45:13.894227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.017 [2024-12-06 15:45:13.894235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.017 [2024-12-06 15:45:13.894403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.017 [2024-12-06 15:45:13.894563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.017 [2024-12-06 15:45:13.894572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.017 [2024-12-06 15:45:13.894579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.017 [2024-12-06 15:45:13.894586] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.017 [2024-12-06 15:45:13.906719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.017 [2024-12-06 15:45:13.907111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.017 [2024-12-06 15:45:13.907128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.017 [2024-12-06 15:45:13.907135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.017 [2024-12-06 15:45:13.907295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.017 [2024-12-06 15:45:13.907465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.017 [2024-12-06 15:45:13.907475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.017 [2024-12-06 15:45:13.907481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.017 [2024-12-06 15:45:13.907487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.018 [2024-12-06 15:45:13.919448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.018 [2024-12-06 15:45:13.919792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.018 [2024-12-06 15:45:13.919810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.018 [2024-12-06 15:45:13.919817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.018 [2024-12-06 15:45:13.919976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.018 [2024-12-06 15:45:13.920136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.018 [2024-12-06 15:45:13.920146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.018 [2024-12-06 15:45:13.920152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.018 [2024-12-06 15:45:13.920158] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.018 [2024-12-06 15:45:13.932319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.018 [2024-12-06 15:45:13.932709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.018 [2024-12-06 15:45:13.932726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.018 [2024-12-06 15:45:13.932733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.018 [2024-12-06 15:45:13.932892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.018 [2024-12-06 15:45:13.933053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.018 [2024-12-06 15:45:13.933062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.018 [2024-12-06 15:45:13.933069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.018 [2024-12-06 15:45:13.933075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.018 [2024-12-06 15:45:13.945047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.018 [2024-12-06 15:45:13.945436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.018 [2024-12-06 15:45:13.945455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.018 [2024-12-06 15:45:13.945463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.018 [2024-12-06 15:45:13.945631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.018 [2024-12-06 15:45:13.945800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.018 [2024-12-06 15:45:13.945810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.018 [2024-12-06 15:45:13.945820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.018 [2024-12-06 15:45:13.945828] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.018 [2024-12-06 15:45:13.958102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.018 [2024-12-06 15:45:13.958481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.018 [2024-12-06 15:45:13.958499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.018 [2024-12-06 15:45:13.958508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.018 [2024-12-06 15:45:13.958681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.018 [2024-12-06 15:45:13.958856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.018 [2024-12-06 15:45:13.958866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.018 [2024-12-06 15:45:13.958872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.018 [2024-12-06 15:45:13.958879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.018 [2024-12-06 15:45:13.971063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.018 [2024-12-06 15:45:13.971450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.018 [2024-12-06 15:45:13.971467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.018 [2024-12-06 15:45:13.971475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.018 [2024-12-06 15:45:13.971634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.018 [2024-12-06 15:45:13.971794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.018 [2024-12-06 15:45:13.971803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.018 [2024-12-06 15:45:13.971810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.018 [2024-12-06 15:45:13.971817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.018 [2024-12-06 15:45:13.983937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.018 [2024-12-06 15:45:13.984340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.018 [2024-12-06 15:45:13.984357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.018 [2024-12-06 15:45:13.984365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.018 [2024-12-06 15:45:13.984541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.018 [2024-12-06 15:45:13.984709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.018 [2024-12-06 15:45:13.984719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.018 [2024-12-06 15:45:13.984726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.018 [2024-12-06 15:45:13.984733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.018 [2024-12-06 15:45:13.996753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.018 [2024-12-06 15:45:13.997131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.018 [2024-12-06 15:45:13.997148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.018 [2024-12-06 15:45:13.997156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.018 [2024-12-06 15:45:13.997315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.018 [2024-12-06 15:45:13.997482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.018 [2024-12-06 15:45:13.997492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.018 [2024-12-06 15:45:13.997498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.018 [2024-12-06 15:45:13.997505] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.018 [2024-12-06 15:45:14.009719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.018 [2024-12-06 15:45:14.010127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.018 [2024-12-06 15:45:14.010145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.018 [2024-12-06 15:45:14.010153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.018 [2024-12-06 15:45:14.010327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.018 [2024-12-06 15:45:14.010508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.018 [2024-12-06 15:45:14.010519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.018 [2024-12-06 15:45:14.010525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.018 [2024-12-06 15:45:14.010532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.278 [2024-12-06 15:45:14.022729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.278 [2024-12-06 15:45:14.023058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.278 [2024-12-06 15:45:14.023076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.278 [2024-12-06 15:45:14.023083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.278 [2024-12-06 15:45:14.023251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.278 [2024-12-06 15:45:14.023427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.278 [2024-12-06 15:45:14.023437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.278 [2024-12-06 15:45:14.023456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.278 [2024-12-06 15:45:14.023463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.278 [2024-12-06 15:45:14.035585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.278 [2024-12-06 15:45:14.035953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.278 [2024-12-06 15:45:14.035971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.278 [2024-12-06 15:45:14.035981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.278 [2024-12-06 15:45:14.036142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.278 [2024-12-06 15:45:14.036302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.278 [2024-12-06 15:45:14.036311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.278 [2024-12-06 15:45:14.036317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.278 [2024-12-06 15:45:14.036323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.278 [2024-12-06 15:45:14.048445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.278 [2024-12-06 15:45:14.048834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.278 [2024-12-06 15:45:14.048850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.278 [2024-12-06 15:45:14.048858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.278 [2024-12-06 15:45:14.049040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.278 [2024-12-06 15:45:14.049208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.278 [2024-12-06 15:45:14.049219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.278 [2024-12-06 15:45:14.049227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.278 [2024-12-06 15:45:14.049234] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.278 [2024-12-06 15:45:14.061243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.278 [2024-12-06 15:45:14.061648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.278 [2024-12-06 15:45:14.061696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:08.278 [2024-12-06 15:45:14.061722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:08.278 [2024-12-06 15:45:14.062153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:08.278 [2024-12-06 15:45:14.062314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.278 [2024-12-06 15:45:14.062323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.278 [2024-12-06 15:45:14.062330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.278 [2024-12-06 15:45:14.062337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.278 [2024-12-06 15:45:14.074016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.278 [2024-12-06 15:45:14.074408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.278 [2024-12-06 15:45:14.074426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:08.278 [2024-12-06 15:45:14.074434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:08.278 [2024-12-06 15:45:14.074595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:08.278 [2024-12-06 15:45:14.074758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.278 [2024-12-06 15:45:14.074768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.278 [2024-12-06 15:45:14.074775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.278 [2024-12-06 15:45:14.074781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.278 [2024-12-06 15:45:14.086764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.278 [2024-12-06 15:45:14.087127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.278 [2024-12-06 15:45:14.087144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:08.278 [2024-12-06 15:45:14.087151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:08.278 [2024-12-06 15:45:14.087310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:08.278 [2024-12-06 15:45:14.087478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.278 [2024-12-06 15:45:14.087488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.278 [2024-12-06 15:45:14.087494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.278 [2024-12-06 15:45:14.087500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.278 [2024-12-06 15:45:14.099595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.278 [2024-12-06 15:45:14.099986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.278 [2024-12-06 15:45:14.100003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:08.278 [2024-12-06 15:45:14.100011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:08.278 [2024-12-06 15:45:14.100169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:08.278 [2024-12-06 15:45:14.100329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.278 [2024-12-06 15:45:14.100338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.278 [2024-12-06 15:45:14.100344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.278 [2024-12-06 15:45:14.100350] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.278 [2024-12-06 15:45:14.112324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.278 [2024-12-06 15:45:14.112710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.278 [2024-12-06 15:45:14.112728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:08.278 [2024-12-06 15:45:14.112735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:08.278 [2024-12-06 15:45:14.112895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:08.278 [2024-12-06 15:45:14.113054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.278 [2024-12-06 15:45:14.113064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.278 [2024-12-06 15:45:14.113074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.278 [2024-12-06 15:45:14.113081] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.278 [2024-12-06 15:45:14.125059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.278 [2024-12-06 15:45:14.125442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.278 [2024-12-06 15:45:14.125460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.278 [2024-12-06 15:45:14.125467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.278 [2024-12-06 15:45:14.125626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.278 [2024-12-06 15:45:14.125786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.278 [2024-12-06 15:45:14.125795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.278 [2024-12-06 15:45:14.125801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-06 15:45:14.125808] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-06 15:45:14.137887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-06 15:45:14.138272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-06 15:45:14.138290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-06 15:45:14.138297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.279 [2024-12-06 15:45:14.138462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.279 [2024-12-06 15:45:14.138622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-06 15:45:14.138632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-06 15:45:14.138638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-06 15:45:14.138644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-06 15:45:14.150623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-06 15:45:14.150934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-06 15:45:14.150951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-06 15:45:14.150959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.279 [2024-12-06 15:45:14.151119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.279 [2024-12-06 15:45:14.151279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-06 15:45:14.151288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-06 15:45:14.151296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-06 15:45:14.151302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-06 15:45:14.163524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-06 15:45:14.163913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-06 15:45:14.163930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-06 15:45:14.163937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.279 [2024-12-06 15:45:14.164096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.279 [2024-12-06 15:45:14.164255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-06 15:45:14.164265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-06 15:45:14.164271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-06 15:45:14.164278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-06 15:45:14.176341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-06 15:45:14.176717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-06 15:45:14.176734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-06 15:45:14.176742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.279 [2024-12-06 15:45:14.176901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.279 [2024-12-06 15:45:14.177060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-06 15:45:14.177069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-06 15:45:14.177076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-06 15:45:14.177081] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-06 15:45:14.189185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-06 15:45:14.189557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-06 15:45:14.189575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-06 15:45:14.189582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.279 [2024-12-06 15:45:14.189741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.279 [2024-12-06 15:45:14.189901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-06 15:45:14.189910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-06 15:45:14.189916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-06 15:45:14.189922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-06 15:45:14.202050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-06 15:45:14.202437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-06 15:45:14.202456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-06 15:45:14.202467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.279 [2024-12-06 15:45:14.202635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.279 [2024-12-06 15:45:14.202804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-06 15:45:14.202814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-06 15:45:14.202820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-06 15:45:14.202827] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-06 15:45:14.215172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-06 15:45:14.215556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-06 15:45:14.215574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-06 15:45:14.215582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.279 [2024-12-06 15:45:14.215756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.279 [2024-12-06 15:45:14.215931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-06 15:45:14.215941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-06 15:45:14.215948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-06 15:45:14.215954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-06 15:45:14.228065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-06 15:45:14.228453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-06 15:45:14.228470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-06 15:45:14.228477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.279 [2024-12-06 15:45:14.228636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.279 [2024-12-06 15:45:14.228796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-06 15:45:14.228806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-06 15:45:14.228813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-06 15:45:14.228819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-06 15:45:14.240875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-06 15:45:14.241274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-06 15:45:14.241318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-06 15:45:14.241342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.279 [2024-12-06 15:45:14.241784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.279 [2024-12-06 15:45:14.241949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-06 15:45:14.241957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-06 15:45:14.241963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.279 [2024-12-06 15:45:14.241969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.279 [2024-12-06 15:45:14.253732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.279 [2024-12-06 15:45:14.254120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.279 [2024-12-06 15:45:14.254137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.279 [2024-12-06 15:45:14.254145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.279 [2024-12-06 15:45:14.254304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.279 [2024-12-06 15:45:14.254471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.279 [2024-12-06 15:45:14.254481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.279 [2024-12-06 15:45:14.254487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.280 [2024-12-06 15:45:14.254494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.280 [2024-12-06 15:45:14.266457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.280 [2024-12-06 15:45:14.266811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.280 [2024-12-06 15:45:14.266828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.280 [2024-12-06 15:45:14.266836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.280 [2024-12-06 15:45:14.266995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.280 [2024-12-06 15:45:14.267156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.280 [2024-12-06 15:45:14.267165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.280 [2024-12-06 15:45:14.267171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.280 [2024-12-06 15:45:14.267178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.540 [2024-12-06 15:45:14.279524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.540 [2024-12-06 15:45:14.279940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.540 [2024-12-06 15:45:14.279957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.540 [2024-12-06 15:45:14.279965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.540 [2024-12-06 15:45:14.280124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.540 [2024-12-06 15:45:14.280284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.540 [2024-12-06 15:45:14.280293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.540 [2024-12-06 15:45:14.280303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.540 [2024-12-06 15:45:14.280310] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.540 [2024-12-06 15:45:14.292371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.540 [2024-12-06 15:45:14.292755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.540 [2024-12-06 15:45:14.292772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.540 [2024-12-06 15:45:14.292779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.540 [2024-12-06 15:45:14.292938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.540 [2024-12-06 15:45:14.293098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.540 [2024-12-06 15:45:14.293107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.540 [2024-12-06 15:45:14.293114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.540 [2024-12-06 15:45:14.293120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.540 [2024-12-06 15:45:14.305477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.540 [2024-12-06 15:45:14.305857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.540 [2024-12-06 15:45:14.305876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.540 [2024-12-06 15:45:14.305884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.540 [2024-12-06 15:45:14.306058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.540 [2024-12-06 15:45:14.306232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.540 [2024-12-06 15:45:14.306242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.540 [2024-12-06 15:45:14.306248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.540 [2024-12-06 15:45:14.306255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.540 [2024-12-06 15:45:14.318481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.540 [2024-12-06 15:45:14.318901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.540 [2024-12-06 15:45:14.318919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.540 [2024-12-06 15:45:14.318927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.540 [2024-12-06 15:45:14.319100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.540 [2024-12-06 15:45:14.319275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.540 [2024-12-06 15:45:14.319285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.540 [2024-12-06 15:45:14.319292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.540 [2024-12-06 15:45:14.319299] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.540 [2024-12-06 15:45:14.331542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.540 [2024-12-06 15:45:14.331885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.540 [2024-12-06 15:45:14.331903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.540 [2024-12-06 15:45:14.331910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.540 [2024-12-06 15:45:14.332083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.540 [2024-12-06 15:45:14.332257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.540 [2024-12-06 15:45:14.332267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.540 [2024-12-06 15:45:14.332274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.540 [2024-12-06 15:45:14.332281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.540 [2024-12-06 15:45:14.344482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.540 [2024-12-06 15:45:14.344887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.540 [2024-12-06 15:45:14.344904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.540 [2024-12-06 15:45:14.344911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.540 [2024-12-06 15:45:14.345079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.540 [2024-12-06 15:45:14.345248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.540 [2024-12-06 15:45:14.345258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.540 [2024-12-06 15:45:14.345265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.540 [2024-12-06 15:45:14.345272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.540 [2024-12-06 15:45:14.357338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.540 [2024-12-06 15:45:14.357684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.540 [2024-12-06 15:45:14.357701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.540 [2024-12-06 15:45:14.357708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.540 [2024-12-06 15:45:14.357867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.540 [2024-12-06 15:45:14.358028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.540 [2024-12-06 15:45:14.358037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.540 [2024-12-06 15:45:14.358044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.540 [2024-12-06 15:45:14.358051] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.540 [2024-12-06 15:45:14.370176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.540 [2024-12-06 15:45:14.370580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.540 [2024-12-06 15:45:14.370625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.540 [2024-12-06 15:45:14.370657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.540 [2024-12-06 15:45:14.371106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.540 [2024-12-06 15:45:14.371267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.540 [2024-12-06 15:45:14.371275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.540 [2024-12-06 15:45:14.371281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.540 [2024-12-06 15:45:14.371287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.540 [2024-12-06 15:45:14.382961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.540 [2024-12-06 15:45:14.383299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.540 [2024-12-06 15:45:14.383316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.540 [2024-12-06 15:45:14.383324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.540 [2024-12-06 15:45:14.383489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.540 [2024-12-06 15:45:14.383649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.540 [2024-12-06 15:45:14.383659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.540 [2024-12-06 15:45:14.383666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.540 [2024-12-06 15:45:14.383672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.540 [2024-12-06 15:45:14.395789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.540 [2024-12-06 15:45:14.396156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.540 [2024-12-06 15:45:14.396173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.540 [2024-12-06 15:45:14.396181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.540 [2024-12-06 15:45:14.396340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.540 [2024-12-06 15:45:14.396505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.541 [2024-12-06 15:45:14.396514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.541 [2024-12-06 15:45:14.396521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.541 [2024-12-06 15:45:14.396527] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.541 [2024-12-06 15:45:14.408659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.541 [2024-12-06 15:45:14.409050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.541 [2024-12-06 15:45:14.409095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.541 [2024-12-06 15:45:14.409119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.541 [2024-12-06 15:45:14.409720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.541 [2024-12-06 15:45:14.410081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.541 [2024-12-06 15:45:14.410090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.541 [2024-12-06 15:45:14.410097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.541 [2024-12-06 15:45:14.410103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.541 [2024-12-06 15:45:14.421469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.541 [2024-12-06 15:45:14.421830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.541 [2024-12-06 15:45:14.421848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.541 [2024-12-06 15:45:14.421855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.541 [2024-12-06 15:45:14.422015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.541 [2024-12-06 15:45:14.422175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.541 [2024-12-06 15:45:14.422184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.541 [2024-12-06 15:45:14.422190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.541 [2024-12-06 15:45:14.422197] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.541 [2024-12-06 15:45:14.434373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.541 [2024-12-06 15:45:14.434765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.541 [2024-12-06 15:45:14.434781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.541 [2024-12-06 15:45:14.434789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.541 [2024-12-06 15:45:14.434948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.541 [2024-12-06 15:45:14.435108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.541 [2024-12-06 15:45:14.435117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.541 [2024-12-06 15:45:14.435124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.541 [2024-12-06 15:45:14.435130] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.541 [2024-12-06 15:45:14.447099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.541 [2024-12-06 15:45:14.447466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.541 [2024-12-06 15:45:14.447483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.541 [2024-12-06 15:45:14.447490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.541 [2024-12-06 15:45:14.447649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.541 [2024-12-06 15:45:14.447809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.541 [2024-12-06 15:45:14.447818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.541 [2024-12-06 15:45:14.447824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.541 [2024-12-06 15:45:14.447834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.541 [2024-12-06 15:45:14.459897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.541 [2024-12-06 15:45:14.460304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.541 [2024-12-06 15:45:14.460322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.541 [2024-12-06 15:45:14.460330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.541 [2024-12-06 15:45:14.460506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.541 [2024-12-06 15:45:14.460676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.541 [2024-12-06 15:45:14.460685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.541 [2024-12-06 15:45:14.460692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.541 [2024-12-06 15:45:14.460699] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.541 [2024-12-06 15:45:14.472984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.541 [2024-12-06 15:45:14.473383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.541 [2024-12-06 15:45:14.473402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.541 [2024-12-06 15:45:14.473410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.541 [2024-12-06 15:45:14.473583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.541 [2024-12-06 15:45:14.473757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.541 [2024-12-06 15:45:14.473767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.541 [2024-12-06 15:45:14.473773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.541 [2024-12-06 15:45:14.473780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.541 [2024-12-06 15:45:14.485893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.541 [2024-12-06 15:45:14.486313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.541 [2024-12-06 15:45:14.486359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.541 [2024-12-06 15:45:14.486400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.541 [2024-12-06 15:45:14.486878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.541 [2024-12-06 15:45:14.487047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.541 [2024-12-06 15:45:14.487057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.541 [2024-12-06 15:45:14.487064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.541 [2024-12-06 15:45:14.487071] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.541 [2024-12-06 15:45:14.498887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.541 [2024-12-06 15:45:14.499298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.541 [2024-12-06 15:45:14.499341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.541 [2024-12-06 15:45:14.499365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.541 [2024-12-06 15:45:14.499906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.541 [2024-12-06 15:45:14.500304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.541 [2024-12-06 15:45:14.500325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.541 [2024-12-06 15:45:14.500340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.541 [2024-12-06 15:45:14.500354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.541 [2024-12-06 15:45:14.513772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.541 [2024-12-06 15:45:14.514264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.541 [2024-12-06 15:45:14.514320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.541 [2024-12-06 15:45:14.514343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.541 [2024-12-06 15:45:14.514942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.541 [2024-12-06 15:45:14.515456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.541 [2024-12-06 15:45:14.515469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.541 [2024-12-06 15:45:14.515479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.541 [2024-12-06 15:45:14.515488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.541 [2024-12-06 15:45:14.526743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.541 [2024-12-06 15:45:14.527135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.541 [2024-12-06 15:45:14.527153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.541 [2024-12-06 15:45:14.527161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.541 [2024-12-06 15:45:14.527329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.542 [2024-12-06 15:45:14.527505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.542 [2024-12-06 15:45:14.527515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.542 [2024-12-06 15:45:14.527522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.542 [2024-12-06 15:45:14.527529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.800 [2024-12-06 15:45:14.539737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.800 [2024-12-06 15:45:14.540062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.800 [2024-12-06 15:45:14.540080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.800 [2024-12-06 15:45:14.540088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.800 [2024-12-06 15:45:14.540266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.800 [2024-12-06 15:45:14.540448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.800 [2024-12-06 15:45:14.540460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.800 [2024-12-06 15:45:14.540466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.800 [2024-12-06 15:45:14.540474] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.800 [2024-12-06 15:45:14.552527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.800 [2024-12-06 15:45:14.552914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.800 [2024-12-06 15:45:14.552931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.800 [2024-12-06 15:45:14.552939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.800 [2024-12-06 15:45:14.553098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.800 [2024-12-06 15:45:14.553259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.800 [2024-12-06 15:45:14.553269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.800 [2024-12-06 15:45:14.553276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.800 [2024-12-06 15:45:14.553282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.800 [2024-12-06 15:45:14.565255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.800 [2024-12-06 15:45:14.565653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.800 [2024-12-06 15:45:14.565698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.800 [2024-12-06 15:45:14.565722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.800 [2024-12-06 15:45:14.566138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.800 [2024-12-06 15:45:14.566298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.800 [2024-12-06 15:45:14.566307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.800 [2024-12-06 15:45:14.566314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.800 [2024-12-06 15:45:14.566320] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.800 [2024-12-06 15:45:14.577988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.800 [2024-12-06 15:45:14.578377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.800 [2024-12-06 15:45:14.578425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.800 [2024-12-06 15:45:14.578449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.800 [2024-12-06 15:45:14.579034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.800 [2024-12-06 15:45:14.579306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.800 [2024-12-06 15:45:14.579318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.800 [2024-12-06 15:45:14.579325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.800 [2024-12-06 15:45:14.579331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.800 [2024-12-06 15:45:14.590861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.800 [2024-12-06 15:45:14.591256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.800 [2024-12-06 15:45:14.591272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.800 [2024-12-06 15:45:14.591279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.800 [2024-12-06 15:45:14.591446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.800 [2024-12-06 15:45:14.591607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.800 [2024-12-06 15:45:14.591616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.800 [2024-12-06 15:45:14.591622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.800 [2024-12-06 15:45:14.591628] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.800 [2024-12-06 15:45:14.603596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.800 [2024-12-06 15:45:14.603913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.800 [2024-12-06 15:45:14.603957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.800 [2024-12-06 15:45:14.603981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.800 [2024-12-06 15:45:14.604441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.800 [2024-12-06 15:45:14.604604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.800 [2024-12-06 15:45:14.604614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.800 [2024-12-06 15:45:14.604622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.800 [2024-12-06 15:45:14.604629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.800 7334.00 IOPS, 28.65 MiB/s [2024-12-06T14:45:14.798Z]
00:28:08.800 [2024-12-06 15:45:14.616410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.801 [2024-12-06 15:45:14.616804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.801 [2024-12-06 15:45:14.616821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.801 [2024-12-06 15:45:14.616828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.801 [2024-12-06 15:45:14.616986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.801 [2024-12-06 15:45:14.617146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.801 [2024-12-06 15:45:14.617156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.801 [2024-12-06 15:45:14.617162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.801 [2024-12-06 15:45:14.617172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.801 [2024-12-06 15:45:14.629285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.801 [2024-12-06 15:45:14.629655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.801 [2024-12-06 15:45:14.629673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.801 [2024-12-06 15:45:14.629681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.801 [2024-12-06 15:45:14.629840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.801 [2024-12-06 15:45:14.630000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.801 [2024-12-06 15:45:14.630009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.801 [2024-12-06 15:45:14.630016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.801 [2024-12-06 15:45:14.630022] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.801 [2024-12-06 15:45:14.642088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.801 [2024-12-06 15:45:14.642477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.801 [2024-12-06 15:45:14.642494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.801 [2024-12-06 15:45:14.642503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.801 [2024-12-06 15:45:14.642662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.801 [2024-12-06 15:45:14.642821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.801 [2024-12-06 15:45:14.642831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.801 [2024-12-06 15:45:14.642837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.801 [2024-12-06 15:45:14.642843] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.801 [2024-12-06 15:45:14.654933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.801 [2024-12-06 15:45:14.655295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.801 [2024-12-06 15:45:14.655342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.801 [2024-12-06 15:45:14.655378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.801 [2024-12-06 15:45:14.655964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.801 [2024-12-06 15:45:14.656532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.801 [2024-12-06 15:45:14.656542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.801 [2024-12-06 15:45:14.656548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.801 [2024-12-06 15:45:14.656556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.801 [2024-12-06 15:45:14.667779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.801 [2024-12-06 15:45:14.668178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.801 [2024-12-06 15:45:14.668195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.801 [2024-12-06 15:45:14.668203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.801 [2024-12-06 15:45:14.668363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.801 [2024-12-06 15:45:14.668530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.801 [2024-12-06 15:45:14.668539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.801 [2024-12-06 15:45:14.668545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.801 [2024-12-06 15:45:14.668552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.801 [2024-12-06 15:45:14.680560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.801 [2024-12-06 15:45:14.680932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.801 [2024-12-06 15:45:14.680949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.801 [2024-12-06 15:45:14.680956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.801 [2024-12-06 15:45:14.681115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.801 [2024-12-06 15:45:14.681275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.801 [2024-12-06 15:45:14.681284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.801 [2024-12-06 15:45:14.681291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.801 [2024-12-06 15:45:14.681297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.801 [2024-12-06 15:45:14.693419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.801 [2024-12-06 15:45:14.693711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.801 [2024-12-06 15:45:14.693756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.801 [2024-12-06 15:45:14.693780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.801 [2024-12-06 15:45:14.694380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.801 [2024-12-06 15:45:14.694558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.801 [2024-12-06 15:45:14.694568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.801 [2024-12-06 15:45:14.694574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.801 [2024-12-06 15:45:14.694581] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.801 [2024-12-06 15:45:14.706418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.801 [2024-12-06 15:45:14.706771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.801 [2024-12-06 15:45:14.706817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.801 [2024-12-06 15:45:14.706840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.801 [2024-12-06 15:45:14.707443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.801 [2024-12-06 15:45:14.707912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.801 [2024-12-06 15:45:14.707922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.801 [2024-12-06 15:45:14.707929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.801 [2024-12-06 15:45:14.707936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.801 [2024-12-06 15:45:14.719455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.801 [2024-12-06 15:45:14.719793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.801 [2024-12-06 15:45:14.719811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.801 [2024-12-06 15:45:14.719819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.801 [2024-12-06 15:45:14.719993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.801 [2024-12-06 15:45:14.720166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.801 [2024-12-06 15:45:14.720177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.801 [2024-12-06 15:45:14.720184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.801 [2024-12-06 15:45:14.720191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.801 [2024-12-06 15:45:14.732477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.801 [2024-12-06 15:45:14.732760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.801 [2024-12-06 15:45:14.732778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:08.801 [2024-12-06 15:45:14.732786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:08.801 [2024-12-06 15:45:14.732960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:08.801 [2024-12-06 15:45:14.733134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.801 [2024-12-06 15:45:14.733144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.801 [2024-12-06 15:45:14.733151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.801 [2024-12-06 15:45:14.733159] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.801 [2024-12-06 15:45:14.745425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.802 [2024-12-06 15:45:14.745748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.802 [2024-12-06 15:45:14.745765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:08.802 [2024-12-06 15:45:14.745772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:08.802 [2024-12-06 15:45:14.745940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:08.802 [2024-12-06 15:45:14.746109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.802 [2024-12-06 15:45:14.746122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.802 [2024-12-06 15:45:14.746129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.802 [2024-12-06 15:45:14.746135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.802 [2024-12-06 15:45:14.758259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.802 [2024-12-06 15:45:14.758542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.802 [2024-12-06 15:45:14.758560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:08.802 [2024-12-06 15:45:14.758568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:08.802 [2024-12-06 15:45:14.758727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:08.802 [2024-12-06 15:45:14.758886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.802 [2024-12-06 15:45:14.758896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.802 [2024-12-06 15:45:14.758902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.802 [2024-12-06 15:45:14.758908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.802 [2024-12-06 15:45:14.771053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.802 [2024-12-06 15:45:14.771504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.802 [2024-12-06 15:45:14.771552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:08.802 [2024-12-06 15:45:14.771576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:08.802 [2024-12-06 15:45:14.772159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:08.802 [2024-12-06 15:45:14.772757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.802 [2024-12-06 15:45:14.772793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.802 [2024-12-06 15:45:14.772799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.802 [2024-12-06 15:45:14.772806] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.802 [2024-12-06 15:45:14.786258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.802 [2024-12-06 15:45:14.786638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.802 [2024-12-06 15:45:14.786660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:08.802 [2024-12-06 15:45:14.786672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:08.802 [2024-12-06 15:45:14.786927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:08.802 [2024-12-06 15:45:14.787184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.802 [2024-12-06 15:45:14.787198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.802 [2024-12-06 15:45:14.787208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.802 [2024-12-06 15:45:14.787223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.062 [2024-12-06 15:45:14.799397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.062 [2024-12-06 15:45:14.799687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.062 [2024-12-06 15:45:14.799705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.062 [2024-12-06 15:45:14.799712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.062 [2024-12-06 15:45:14.799886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.062 [2024-12-06 15:45:14.800060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.062 [2024-12-06 15:45:14.800070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.062 [2024-12-06 15:45:14.800078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.062 [2024-12-06 15:45:14.800085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.062 [2024-12-06 15:45:14.812261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.062 [2024-12-06 15:45:14.812535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.062 [2024-12-06 15:45:14.812552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.062 [2024-12-06 15:45:14.812559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.062 [2024-12-06 15:45:14.812718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.062 [2024-12-06 15:45:14.812878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.062 [2024-12-06 15:45:14.812887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.062 [2024-12-06 15:45:14.812894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.062 [2024-12-06 15:45:14.812900] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.062 [2024-12-06 15:45:14.825163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.062 [2024-12-06 15:45:14.825436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.062 [2024-12-06 15:45:14.825453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.062 [2024-12-06 15:45:14.825461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.062 [2024-12-06 15:45:14.825620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.062 [2024-12-06 15:45:14.825780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.062 [2024-12-06 15:45:14.825789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.062 [2024-12-06 15:45:14.825795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.062 [2024-12-06 15:45:14.825801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.062 [2024-12-06 15:45:14.837897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.062 [2024-12-06 15:45:14.838297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.062 [2024-12-06 15:45:14.838314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.062 [2024-12-06 15:45:14.838321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.062 [2024-12-06 15:45:14.838485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.062 [2024-12-06 15:45:14.838646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.062 [2024-12-06 15:45:14.838655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.062 [2024-12-06 15:45:14.838662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.062 [2024-12-06 15:45:14.838668] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.062 [2024-12-06 15:45:14.850749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.062 [2024-12-06 15:45:14.851168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.062 [2024-12-06 15:45:14.851203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.062 [2024-12-06 15:45:14.851229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.062 [2024-12-06 15:45:14.851799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.062 [2024-12-06 15:45:14.851961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.062 [2024-12-06 15:45:14.851970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.062 [2024-12-06 15:45:14.851977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.062 [2024-12-06 15:45:14.851983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.062 [2024-12-06 15:45:14.863541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.062 [2024-12-06 15:45:14.863856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.062 [2024-12-06 15:45:14.863873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.062 [2024-12-06 15:45:14.863880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.062 [2024-12-06 15:45:14.864039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.062 [2024-12-06 15:45:14.864200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.062 [2024-12-06 15:45:14.864210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.062 [2024-12-06 15:45:14.864216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.062 [2024-12-06 15:45:14.864223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.062 [2024-12-06 15:45:14.876374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.062 [2024-12-06 15:45:14.876643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.062 [2024-12-06 15:45:14.876660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.062 [2024-12-06 15:45:14.876666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.062 [2024-12-06 15:45:14.876829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.062 [2024-12-06 15:45:14.876989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.062 [2024-12-06 15:45:14.876999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.062 [2024-12-06 15:45:14.877006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.062 [2024-12-06 15:45:14.877012] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.062 [2024-12-06 15:45:14.889166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.062 [2024-12-06 15:45:14.889566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.062 [2024-12-06 15:45:14.889584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.062 [2024-12-06 15:45:14.889592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.062 [2024-12-06 15:45:14.889751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.062 [2024-12-06 15:45:14.889911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.062 [2024-12-06 15:45:14.889920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.063 [2024-12-06 15:45:14.889926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.063 [2024-12-06 15:45:14.889933] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.063 [2024-12-06 15:45:14.901934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.063 [2024-12-06 15:45:14.902218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.063 [2024-12-06 15:45:14.902235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.063 [2024-12-06 15:45:14.902242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.063 [2024-12-06 15:45:14.902425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.063 [2024-12-06 15:45:14.902595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.063 [2024-12-06 15:45:14.902605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.063 [2024-12-06 15:45:14.902612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.063 [2024-12-06 15:45:14.902618] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.063 [2024-12-06 15:45:14.914977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.063 [2024-12-06 15:45:14.915382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.063 [2024-12-06 15:45:14.915401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.063 [2024-12-06 15:45:14.915409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.063 [2024-12-06 15:45:14.915581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.063 [2024-12-06 15:45:14.915755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.063 [2024-12-06 15:45:14.915768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.063 [2024-12-06 15:45:14.915775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.063 [2024-12-06 15:45:14.915782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.063 [2024-12-06 15:45:14.928019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.063 [2024-12-06 15:45:14.928420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.063 [2024-12-06 15:45:14.928438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.063 [2024-12-06 15:45:14.928446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.063 [2024-12-06 15:45:14.928619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.063 [2024-12-06 15:45:14.928792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.063 [2024-12-06 15:45:14.928802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.063 [2024-12-06 15:45:14.928809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.063 [2024-12-06 15:45:14.928817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.063 [2024-12-06 15:45:14.941108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.063 [2024-12-06 15:45:14.941453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.063 [2024-12-06 15:45:14.941473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.063 [2024-12-06 15:45:14.941481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.063 [2024-12-06 15:45:14.941666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.063 [2024-12-06 15:45:14.941849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.063 [2024-12-06 15:45:14.941860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.063 [2024-12-06 15:45:14.941867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.063 [2024-12-06 15:45:14.941874] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.063 [2024-12-06 15:45:14.954176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.063 [2024-12-06 15:45:14.954588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.063 [2024-12-06 15:45:14.954606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.063 [2024-12-06 15:45:14.954614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.063 [2024-12-06 15:45:14.954787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.063 [2024-12-06 15:45:14.954961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.063 [2024-12-06 15:45:14.954972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.063 [2024-12-06 15:45:14.954978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.063 [2024-12-06 15:45:14.954989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.063 [2024-12-06 15:45:14.967403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.063 [2024-12-06 15:45:14.967799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.063 [2024-12-06 15:45:14.967817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.063 [2024-12-06 15:45:14.967826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.063 [2024-12-06 15:45:14.968009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.063 [2024-12-06 15:45:14.968194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.063 [2024-12-06 15:45:14.968205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.063 [2024-12-06 15:45:14.968212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.063 [2024-12-06 15:45:14.968219] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.063 [2024-12-06 15:45:14.980627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.063 [2024-12-06 15:45:14.981044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.063 [2024-12-06 15:45:14.981062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.063 [2024-12-06 15:45:14.981070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.063 [2024-12-06 15:45:14.981253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.063 [2024-12-06 15:45:14.981445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.063 [2024-12-06 15:45:14.981456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.063 [2024-12-06 15:45:14.981463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.063 [2024-12-06 15:45:14.981471] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.063 [2024-12-06 15:45:14.993871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.063 [2024-12-06 15:45:14.994272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.063 [2024-12-06 15:45:14.994290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.063 [2024-12-06 15:45:14.994298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.063 [2024-12-06 15:45:14.994488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.063 [2024-12-06 15:45:14.994672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.063 [2024-12-06 15:45:14.994682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.063 [2024-12-06 15:45:14.994689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.063 [2024-12-06 15:45:14.994696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.063 [2024-12-06 15:45:15.006959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.063 [2024-12-06 15:45:15.007345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.063 [2024-12-06 15:45:15.007366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.063 [2024-12-06 15:45:15.007382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.063 [2024-12-06 15:45:15.007555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.063 [2024-12-06 15:45:15.007729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.063 [2024-12-06 15:45:15.007738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.063 [2024-12-06 15:45:15.007745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.063 [2024-12-06 15:45:15.007752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.063 [2024-12-06 15:45:15.019931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.063 [2024-12-06 15:45:15.020214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.063 [2024-12-06 15:45:15.020232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.063 [2024-12-06 15:45:15.020240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.063 [2024-12-06 15:45:15.020414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.063 [2024-12-06 15:45:15.020583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.064 [2024-12-06 15:45:15.020593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.064 [2024-12-06 15:45:15.020599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.064 [2024-12-06 15:45:15.020606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.064 [2024-12-06 15:45:15.032857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.064 [2024-12-06 15:45:15.033196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.064 [2024-12-06 15:45:15.033239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.064 [2024-12-06 15:45:15.033264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.064 [2024-12-06 15:45:15.033859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.064 [2024-12-06 15:45:15.034402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.064 [2024-12-06 15:45:15.034412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.064 [2024-12-06 15:45:15.034419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.064 [2024-12-06 15:45:15.034426] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.064 [2024-12-06 15:45:15.045656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.064 [2024-12-06 15:45:15.045910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.064 [2024-12-06 15:45:15.045927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.064 [2024-12-06 15:45:15.045935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.064 [2024-12-06 15:45:15.046098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.064 [2024-12-06 15:45:15.046258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.064 [2024-12-06 15:45:15.046268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.064 [2024-12-06 15:45:15.046274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.064 [2024-12-06 15:45:15.046281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.323 [2024-12-06 15:45:15.058683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.323 [2024-12-06 15:45:15.059122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.323 [2024-12-06 15:45:15.059141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.323 [2024-12-06 15:45:15.059149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.323 [2024-12-06 15:45:15.059322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.323 [2024-12-06 15:45:15.059505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.323 [2024-12-06 15:45:15.059516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.323 [2024-12-06 15:45:15.059523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.323 [2024-12-06 15:45:15.059530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.323 [2024-12-06 15:45:15.071782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.323 [2024-12-06 15:45:15.072073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.323 [2024-12-06 15:45:15.072092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.323 [2024-12-06 15:45:15.072100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.323 [2024-12-06 15:45:15.072274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.323 [2024-12-06 15:45:15.072456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.323 [2024-12-06 15:45:15.072467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.323 [2024-12-06 15:45:15.072474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.323 [2024-12-06 15:45:15.072481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.323 [2024-12-06 15:45:15.084693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.323 [2024-12-06 15:45:15.085037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.323 [2024-12-06 15:45:15.085083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.323 [2024-12-06 15:45:15.085107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.323 [2024-12-06 15:45:15.085589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.323 [2024-12-06 15:45:15.085759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.323 [2024-12-06 15:45:15.085772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.323 [2024-12-06 15:45:15.085779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.323 [2024-12-06 15:45:15.085786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.323 [2024-12-06 15:45:15.099598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.323 [2024-12-06 15:45:15.100029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.323 [2024-12-06 15:45:15.100074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.323 [2024-12-06 15:45:15.100098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.323 [2024-12-06 15:45:15.100635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.323 [2024-12-06 15:45:15.100894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.323 [2024-12-06 15:45:15.100907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.323 [2024-12-06 15:45:15.100918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.323 [2024-12-06 15:45:15.100927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.323 [2024-12-06 15:45:15.112522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.323 [2024-12-06 15:45:15.112941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.323 [2024-12-06 15:45:15.112985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.323 [2024-12-06 15:45:15.113009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.323 [2024-12-06 15:45:15.113459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.323 [2024-12-06 15:45:15.113630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.323 [2024-12-06 15:45:15.113639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.323 [2024-12-06 15:45:15.113646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.324 [2024-12-06 15:45:15.113652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.324 [2024-12-06 15:45:15.125322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.324 [2024-12-06 15:45:15.125667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.324 [2024-12-06 15:45:15.125685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.324 [2024-12-06 15:45:15.125693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.324 [2024-12-06 15:45:15.125853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.324 [2024-12-06 15:45:15.126013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.324 [2024-12-06 15:45:15.126022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.324 [2024-12-06 15:45:15.126028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.324 [2024-12-06 15:45:15.126035] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.324 [2024-12-06 15:45:15.138169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.324 [2024-12-06 15:45:15.138581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.324 [2024-12-06 15:45:15.138627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.324 [2024-12-06 15:45:15.138651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.324 [2024-12-06 15:45:15.139234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.324 [2024-12-06 15:45:15.139751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.324 [2024-12-06 15:45:15.139760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.324 [2024-12-06 15:45:15.139766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.324 [2024-12-06 15:45:15.139773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.324 [2024-12-06 15:45:15.150928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.324 [2024-12-06 15:45:15.151288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.324 [2024-12-06 15:45:15.151333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.324 [2024-12-06 15:45:15.151357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.324 [2024-12-06 15:45:15.151860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.324 [2024-12-06 15:45:15.152022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.324 [2024-12-06 15:45:15.152031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.324 [2024-12-06 15:45:15.152037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.324 [2024-12-06 15:45:15.152044] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.324 [2024-12-06 15:45:15.163663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.324 [2024-12-06 15:45:15.163992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.324 [2024-12-06 15:45:15.164008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.324 [2024-12-06 15:45:15.164015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.324 [2024-12-06 15:45:15.164175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.324 [2024-12-06 15:45:15.164335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.324 [2024-12-06 15:45:15.164344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.324 [2024-12-06 15:45:15.164350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.324 [2024-12-06 15:45:15.164357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.324 [2024-12-06 15:45:15.176479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.324 [2024-12-06 15:45:15.176847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.324 [2024-12-06 15:45:15.176866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.324 [2024-12-06 15:45:15.176873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.324 [2024-12-06 15:45:15.177033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.324 [2024-12-06 15:45:15.177193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.324 [2024-12-06 15:45:15.177201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.324 [2024-12-06 15:45:15.177207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.324 [2024-12-06 15:45:15.177214] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.324 [2024-12-06 15:45:15.189351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.324 [2024-12-06 15:45:15.189608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.324 [2024-12-06 15:45:15.189626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.324 [2024-12-06 15:45:15.189633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.324 [2024-12-06 15:45:15.189791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.324 [2024-12-06 15:45:15.189951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.324 [2024-12-06 15:45:15.189960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.324 [2024-12-06 15:45:15.189966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.324 [2024-12-06 15:45:15.189972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.324 [2024-12-06 15:45:15.202104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.324 [2024-12-06 15:45:15.202505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.324 [2024-12-06 15:45:15.202552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.324 [2024-12-06 15:45:15.202576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.324 [2024-12-06 15:45:15.202983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.324 [2024-12-06 15:45:15.203144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.324 [2024-12-06 15:45:15.203154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.324 [2024-12-06 15:45:15.203160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.324 [2024-12-06 15:45:15.203167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.324 [2024-12-06 15:45:15.214966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.324 [2024-12-06 15:45:15.215391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.324 [2024-12-06 15:45:15.215439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.324 [2024-12-06 15:45:15.215464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.324 [2024-12-06 15:45:15.215925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.324 [2024-12-06 15:45:15.216087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.324 [2024-12-06 15:45:15.216096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.324 [2024-12-06 15:45:15.216102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.324 [2024-12-06 15:45:15.216108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.324 [2024-12-06 15:45:15.227729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.324 [2024-12-06 15:45:15.228115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.324 [2024-12-06 15:45:15.228133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.324 [2024-12-06 15:45:15.228140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.324 [2024-12-06 15:45:15.228299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.324 [2024-12-06 15:45:15.228465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.324 [2024-12-06 15:45:15.228475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.324 [2024-12-06 15:45:15.228481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.324 [2024-12-06 15:45:15.228488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.324 [2024-12-06 15:45:15.240545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.324 [2024-12-06 15:45:15.240875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.324 [2024-12-06 15:45:15.240893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.324 [2024-12-06 15:45:15.240900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.324 [2024-12-06 15:45:15.241068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.324 [2024-12-06 15:45:15.241236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.324 [2024-12-06 15:45:15.241246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.324 [2024-12-06 15:45:15.241252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.324 [2024-12-06 15:45:15.241259] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.324 [2024-12-06 15:45:15.253588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.324 [2024-12-06 15:45:15.253974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.324 [2024-12-06 15:45:15.253991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.324 [2024-12-06 15:45:15.253999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.324 [2024-12-06 15:45:15.254173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.324 [2024-12-06 15:45:15.254347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.324 [2024-12-06 15:45:15.254356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.324 [2024-12-06 15:45:15.254373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.324 [2024-12-06 15:45:15.254380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.325 [2024-12-06 15:45:15.266404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.325 [2024-12-06 15:45:15.266814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.325 [2024-12-06 15:45:15.266860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.325 [2024-12-06 15:45:15.266884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.325 [2024-12-06 15:45:15.267483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.325 [2024-12-06 15:45:15.268073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.325 [2024-12-06 15:45:15.268082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.325 [2024-12-06 15:45:15.268088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.325 [2024-12-06 15:45:15.268094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.325 [2024-12-06 15:45:15.279151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.325 [2024-12-06 15:45:15.279535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.325 [2024-12-06 15:45:15.279553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.325 [2024-12-06 15:45:15.279561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.325 [2024-12-06 15:45:15.279719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.325 [2024-12-06 15:45:15.279879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.325 [2024-12-06 15:45:15.279888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.325 [2024-12-06 15:45:15.279894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.325 [2024-12-06 15:45:15.279901] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.325 [2024-12-06 15:45:15.292022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.325 [2024-12-06 15:45:15.292418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.325 [2024-12-06 15:45:15.292436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.325 [2024-12-06 15:45:15.292443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.325 [2024-12-06 15:45:15.292602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.325 [2024-12-06 15:45:15.292763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.325 [2024-12-06 15:45:15.292772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.325 [2024-12-06 15:45:15.292778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.325 [2024-12-06 15:45:15.292784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.325 [2024-12-06 15:45:15.304763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.325 [2024-12-06 15:45:15.305110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.325 [2024-12-06 15:45:15.305128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.325 [2024-12-06 15:45:15.305135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.325 [2024-12-06 15:45:15.305294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.325 [2024-12-06 15:45:15.305460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.325 [2024-12-06 15:45:15.305469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.325 [2024-12-06 15:45:15.305476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.325 [2024-12-06 15:45:15.305482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.325 [2024-12-06 15:45:15.317801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.584 [2024-12-06 15:45:15.318204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-12-06 15:45:15.318221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.584 [2024-12-06 15:45:15.318229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.584 [2024-12-06 15:45:15.318408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.584 [2024-12-06 15:45:15.318582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.584 [2024-12-06 15:45:15.318592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.584 [2024-12-06 15:45:15.318599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.584 [2024-12-06 15:45:15.318606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.584 [2024-12-06 15:45:15.330616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.584 [2024-12-06 15:45:15.331026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.584 [2024-12-06 15:45:15.331071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.584 [2024-12-06 15:45:15.331095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.584 [2024-12-06 15:45:15.331691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.584 [2024-12-06 15:45:15.332213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.584 [2024-12-06 15:45:15.332223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.584 [2024-12-06 15:45:15.332230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.584 [2024-12-06 15:45:15.332236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.584 [2024-12-06 15:45:15.343372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.584 [2024-12-06 15:45:15.343691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.584 [2024-12-06 15:45:15.343708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.584 [2024-12-06 15:45:15.343719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.584 [2024-12-06 15:45:15.343878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.584 [2024-12-06 15:45:15.344040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.584 [2024-12-06 15:45:15.344049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.584 [2024-12-06 15:45:15.344055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.584 [2024-12-06 15:45:15.344062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.584 [2024-12-06 15:45:15.356195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.584 [2024-12-06 15:45:15.356593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.585 [2024-12-06 15:45:15.356610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.585 [2024-12-06 15:45:15.356618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.585 [2024-12-06 15:45:15.356778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.585 [2024-12-06 15:45:15.356937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.585 [2024-12-06 15:45:15.356947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.585 [2024-12-06 15:45:15.356953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.585 [2024-12-06 15:45:15.356959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.585 [2024-12-06 15:45:15.368930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.585 [2024-12-06 15:45:15.369233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.585 [2024-12-06 15:45:15.369249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.585 [2024-12-06 15:45:15.369256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.585 [2024-12-06 15:45:15.369419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.585 [2024-12-06 15:45:15.369580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.585 [2024-12-06 15:45:15.369589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.585 [2024-12-06 15:45:15.369595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.585 [2024-12-06 15:45:15.369601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.585 [2024-12-06 15:45:15.381724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.585 [2024-12-06 15:45:15.382106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.585 [2024-12-06 15:45:15.382123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.585 [2024-12-06 15:45:15.382130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.585 [2024-12-06 15:45:15.382289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.585 [2024-12-06 15:45:15.382459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.585 [2024-12-06 15:45:15.382469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.585 [2024-12-06 15:45:15.382476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.585 [2024-12-06 15:45:15.382482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.585 [2024-12-06 15:45:15.394549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.585 [2024-12-06 15:45:15.394940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.585 [2024-12-06 15:45:15.394957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.585 [2024-12-06 15:45:15.394965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.585 [2024-12-06 15:45:15.395125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.585 [2024-12-06 15:45:15.395285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.585 [2024-12-06 15:45:15.395294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.585 [2024-12-06 15:45:15.395301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.585 [2024-12-06 15:45:15.395307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.585 [2024-12-06 15:45:15.407379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.585 [2024-12-06 15:45:15.407771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.585 [2024-12-06 15:45:15.407787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.585 [2024-12-06 15:45:15.407794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.585 [2024-12-06 15:45:15.407953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.585 [2024-12-06 15:45:15.408112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.585 [2024-12-06 15:45:15.408122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.585 [2024-12-06 15:45:15.408128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.585 [2024-12-06 15:45:15.408134] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.585 [2024-12-06 15:45:15.420249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.585 [2024-12-06 15:45:15.420646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.585 [2024-12-06 15:45:15.420664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.585 [2024-12-06 15:45:15.420671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.585 [2024-12-06 15:45:15.420829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.585 [2024-12-06 15:45:15.420989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.585 [2024-12-06 15:45:15.420998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.585 [2024-12-06 15:45:15.421008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.585 [2024-12-06 15:45:15.421015] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.585 [2024-12-06 15:45:15.433116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.585 [2024-12-06 15:45:15.433511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.585 [2024-12-06 15:45:15.433528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.585 [2024-12-06 15:45:15.433535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.585 [2024-12-06 15:45:15.433694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.585 [2024-12-06 15:45:15.433853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.585 [2024-12-06 15:45:15.433862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.585 [2024-12-06 15:45:15.433869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.585 [2024-12-06 15:45:15.433876] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.585 [2024-12-06 15:45:15.445950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.585 [2024-12-06 15:45:15.446251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.585 [2024-12-06 15:45:15.446267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.585 [2024-12-06 15:45:15.446275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.585 [2024-12-06 15:45:15.446439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.585 [2024-12-06 15:45:15.446599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.585 [2024-12-06 15:45:15.446609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.585 [2024-12-06 15:45:15.446615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.585 [2024-12-06 15:45:15.446621] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.585 [2024-12-06 15:45:15.458730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.585 [2024-12-06 15:45:15.459123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.585 [2024-12-06 15:45:15.459140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.585 [2024-12-06 15:45:15.459147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.585 [2024-12-06 15:45:15.459306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.585 [2024-12-06 15:45:15.459472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.585 [2024-12-06 15:45:15.459482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.585 [2024-12-06 15:45:15.459488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.585 [2024-12-06 15:45:15.459494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.585 [2024-12-06 15:45:15.471463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.585 [2024-12-06 15:45:15.471857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.585 [2024-12-06 15:45:15.471874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.585 [2024-12-06 15:45:15.471881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.585 [2024-12-06 15:45:15.472039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.585 [2024-12-06 15:45:15.472199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.585 [2024-12-06 15:45:15.472209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.585 [2024-12-06 15:45:15.472215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.586 [2024-12-06 15:45:15.472222] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.586 [2024-12-06 15:45:15.484196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.586 [2024-12-06 15:45:15.484460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.586 [2024-12-06 15:45:15.484529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.586 [2024-12-06 15:45:15.484554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.586 [2024-12-06 15:45:15.485136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.586 [2024-12-06 15:45:15.485636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.586 [2024-12-06 15:45:15.485645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.586 [2024-12-06 15:45:15.485652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.586 [2024-12-06 15:45:15.485658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.586 [2024-12-06 15:45:15.497026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.586 [2024-12-06 15:45:15.497440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.586 [2024-12-06 15:45:15.497457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.586 [2024-12-06 15:45:15.497465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.586 [2024-12-06 15:45:15.497634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.586 [2024-12-06 15:45:15.497802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.586 [2024-12-06 15:45:15.497811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.586 [2024-12-06 15:45:15.497818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.586 [2024-12-06 15:45:15.497825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.586 [2024-12-06 15:45:15.510119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.586 [2024-12-06 15:45:15.510503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.586 [2024-12-06 15:45:15.510522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.586 [2024-12-06 15:45:15.510533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.586 [2024-12-06 15:45:15.510708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.586 [2024-12-06 15:45:15.510881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.586 [2024-12-06 15:45:15.510891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.586 [2024-12-06 15:45:15.510898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.586 [2024-12-06 15:45:15.510905] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.586 [2024-12-06 15:45:15.522930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.586 [2024-12-06 15:45:15.523335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.586 [2024-12-06 15:45:15.523390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.586 [2024-12-06 15:45:15.523415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.586 [2024-12-06 15:45:15.523847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.586 [2024-12-06 15:45:15.524007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.586 [2024-12-06 15:45:15.524016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.586 [2024-12-06 15:45:15.524023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.586 [2024-12-06 15:45:15.524029] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.586 [2024-12-06 15:45:15.535695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.586 [2024-12-06 15:45:15.536005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.586 [2024-12-06 15:45:15.536021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.586 [2024-12-06 15:45:15.536029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.586 [2024-12-06 15:45:15.536188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.586 [2024-12-06 15:45:15.536347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.586 [2024-12-06 15:45:15.536357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.586 [2024-12-06 15:45:15.536363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.586 [2024-12-06 15:45:15.536376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.586 [2024-12-06 15:45:15.548499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.586 [2024-12-06 15:45:15.548895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.586 [2024-12-06 15:45:15.548940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.586 [2024-12-06 15:45:15.548964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.586 [2024-12-06 15:45:15.549545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.586 [2024-12-06 15:45:15.549709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.586 [2024-12-06 15:45:15.549719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.586 [2024-12-06 15:45:15.549725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.586 [2024-12-06 15:45:15.549731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.586 [2024-12-06 15:45:15.561279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.586 [2024-12-06 15:45:15.561689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.586 [2024-12-06 15:45:15.561707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.586 [2024-12-06 15:45:15.561715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.586 [2024-12-06 15:45:15.561889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.586 [2024-12-06 15:45:15.562062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.586 [2024-12-06 15:45:15.562072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.586 [2024-12-06 15:45:15.562079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.586 [2024-12-06 15:45:15.562085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.586 [2024-12-06 15:45:15.574118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.586 [2024-12-06 15:45:15.574488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.586 [2024-12-06 15:45:15.574506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.586 [2024-12-06 15:45:15.574514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.586 [2024-12-06 15:45:15.574674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.586 [2024-12-06 15:45:15.574835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.586 [2024-12-06 15:45:15.574844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.586 [2024-12-06 15:45:15.574850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.586 [2024-12-06 15:45:15.574856] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.845 [2024-12-06 15:45:15.587050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.845 [2024-12-06 15:45:15.587443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.845 [2024-12-06 15:45:15.587460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.845 [2024-12-06 15:45:15.587467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.845 [2024-12-06 15:45:15.587627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.845 [2024-12-06 15:45:15.587787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.845 [2024-12-06 15:45:15.587798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.845 [2024-12-06 15:45:15.587808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.845 [2024-12-06 15:45:15.587815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.845 [2024-12-06 15:45:15.599866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.845 [2024-12-06 15:45:15.600264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.845 [2024-12-06 15:45:15.600281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.845 [2024-12-06 15:45:15.600288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.845 [2024-12-06 15:45:15.600453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.845 [2024-12-06 15:45:15.600614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.845 [2024-12-06 15:45:15.600622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.845 [2024-12-06 15:45:15.600629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.845 [2024-12-06 15:45:15.600635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.845 5867.20 IOPS, 22.92 MiB/s [2024-12-06T14:45:15.843Z] [2024-12-06 15:45:15.614050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.845 [2024-12-06 15:45:15.614464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.845 [2024-12-06 15:45:15.614481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.845 [2024-12-06 15:45:15.614490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.845 [2024-12-06 15:45:15.614649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.845 [2024-12-06 15:45:15.614809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.845 [2024-12-06 15:45:15.614818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.845 [2024-12-06 15:45:15.614824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.845 [2024-12-06 15:45:15.614831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.845 [2024-12-06 15:45:15.626785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.845 [2024-12-06 15:45:15.627184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.845 [2024-12-06 15:45:15.627202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.845 [2024-12-06 15:45:15.627209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.845 [2024-12-06 15:45:15.627374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.845 [2024-12-06 15:45:15.627535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.845 [2024-12-06 15:45:15.627544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.845 [2024-12-06 15:45:15.627550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.845 [2024-12-06 15:45:15.627557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.845 [2024-12-06 15:45:15.639616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.846 [2024-12-06 15:45:15.640023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.846 [2024-12-06 15:45:15.640068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.846 [2024-12-06 15:45:15.640092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.846 [2024-12-06 15:45:15.640667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.846 [2024-12-06 15:45:15.640830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.846 [2024-12-06 15:45:15.640839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.846 [2024-12-06 15:45:15.640845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.846 [2024-12-06 15:45:15.640852] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.846 [2024-12-06 15:45:15.654442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.846 [2024-12-06 15:45:15.654939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.846 [2024-12-06 15:45:15.654961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.846 [2024-12-06 15:45:15.654973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.846 [2024-12-06 15:45:15.655228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.846 [2024-12-06 15:45:15.655491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.846 [2024-12-06 15:45:15.655504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.846 [2024-12-06 15:45:15.655515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.846 [2024-12-06 15:45:15.655524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.846 [2024-12-06 15:45:15.667468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.846 [2024-12-06 15:45:15.667850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.846 [2024-12-06 15:45:15.667868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.846 [2024-12-06 15:45:15.667877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.846 [2024-12-06 15:45:15.668050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.846 [2024-12-06 15:45:15.668223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.846 [2024-12-06 15:45:15.668233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.846 [2024-12-06 15:45:15.668240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.846 [2024-12-06 15:45:15.668247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.846 [2024-12-06 15:45:15.680326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.846 [2024-12-06 15:45:15.680693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.846 [2024-12-06 15:45:15.680709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.846 [2024-12-06 15:45:15.680720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.846 [2024-12-06 15:45:15.680880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.846 [2024-12-06 15:45:15.681040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.846 [2024-12-06 15:45:15.681049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.846 [2024-12-06 15:45:15.681056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.846 [2024-12-06 15:45:15.681062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.846 [2024-12-06 15:45:15.693184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.846 [2024-12-06 15:45:15.693553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.846 [2024-12-06 15:45:15.693570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:09.846 [2024-12-06 15:45:15.693578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:09.846 [2024-12-06 15:45:15.693737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:09.846 [2024-12-06 15:45:15.693897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.846 [2024-12-06 15:45:15.693906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.846 [2024-12-06 15:45:15.693913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.846 [2024-12-06 15:45:15.693919] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.846 [2024-12-06 15:45:15.705998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.846 [2024-12-06 15:45:15.706343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.846 [2024-12-06 15:45:15.706359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.846 [2024-12-06 15:45:15.706379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.846 [2024-12-06 15:45:15.706538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.846 [2024-12-06 15:45:15.706698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.846 [2024-12-06 15:45:15.706707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.846 [2024-12-06 15:45:15.706714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.846 [2024-12-06 15:45:15.706720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.846 [2024-12-06 15:45:15.718836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.846 [2024-12-06 15:45:15.719232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.846 [2024-12-06 15:45:15.719249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.846 [2024-12-06 15:45:15.719256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.846 [2024-12-06 15:45:15.719422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.846 [2024-12-06 15:45:15.719585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.846 [2024-12-06 15:45:15.719594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.846 [2024-12-06 15:45:15.719601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.846 [2024-12-06 15:45:15.719607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.846 [2024-12-06 15:45:15.731709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.846 [2024-12-06 15:45:15.732079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.846 [2024-12-06 15:45:15.732096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.846 [2024-12-06 15:45:15.732104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.846 [2024-12-06 15:45:15.732264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.846 [2024-12-06 15:45:15.732429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.846 [2024-12-06 15:45:15.732439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.846 [2024-12-06 15:45:15.732446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.846 [2024-12-06 15:45:15.732452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.846 [2024-12-06 15:45:15.744558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.846 [2024-12-06 15:45:15.744951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.846 [2024-12-06 15:45:15.744968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.846 [2024-12-06 15:45:15.744976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.846 [2024-12-06 15:45:15.745135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.846 [2024-12-06 15:45:15.745295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.846 [2024-12-06 15:45:15.745305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.846 [2024-12-06 15:45:15.745312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.846 [2024-12-06 15:45:15.745318] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.846 [2024-12-06 15:45:15.757428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.846 [2024-12-06 15:45:15.757816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.846 [2024-12-06 15:45:15.757834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.846 [2024-12-06 15:45:15.757842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.846 [2024-12-06 15:45:15.758010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.846 [2024-12-06 15:45:15.758178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.846 [2024-12-06 15:45:15.758188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.846 [2024-12-06 15:45:15.758200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.847 [2024-12-06 15:45:15.758207] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.847 [2024-12-06 15:45:15.770497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.847 [2024-12-06 15:45:15.770903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.847 [2024-12-06 15:45:15.770920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.847 [2024-12-06 15:45:15.770928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.847 [2024-12-06 15:45:15.771102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.847 [2024-12-06 15:45:15.771275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.847 [2024-12-06 15:45:15.771285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.847 [2024-12-06 15:45:15.771292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.847 [2024-12-06 15:45:15.771298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.847 [2024-12-06 15:45:15.783325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.847 [2024-12-06 15:45:15.783644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.847 [2024-12-06 15:45:15.783661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.847 [2024-12-06 15:45:15.783670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.847 [2024-12-06 15:45:15.783829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.847 [2024-12-06 15:45:15.783988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.847 [2024-12-06 15:45:15.783998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.847 [2024-12-06 15:45:15.784004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.847 [2024-12-06 15:45:15.784010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.847 [2024-12-06 15:45:15.796137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.847 [2024-12-06 15:45:15.796542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.847 [2024-12-06 15:45:15.796588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.847 [2024-12-06 15:45:15.796613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.847 [2024-12-06 15:45:15.797036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.847 [2024-12-06 15:45:15.797197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.847 [2024-12-06 15:45:15.797207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.847 [2024-12-06 15:45:15.797214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.847 [2024-12-06 15:45:15.797220] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.847 [2024-12-06 15:45:15.808895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.847 [2024-12-06 15:45:15.809303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.847 [2024-12-06 15:45:15.809348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.847 [2024-12-06 15:45:15.809393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.847 [2024-12-06 15:45:15.809978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.847 [2024-12-06 15:45:15.810414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.847 [2024-12-06 15:45:15.810424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.847 [2024-12-06 15:45:15.810430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.847 [2024-12-06 15:45:15.810437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.847 [2024-12-06 15:45:15.821645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.847 [2024-12-06 15:45:15.822040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.847 [2024-12-06 15:45:15.822056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.847 [2024-12-06 15:45:15.822064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.847 [2024-12-06 15:45:15.822222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.847 [2024-12-06 15:45:15.822387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.847 [2024-12-06 15:45:15.822413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.847 [2024-12-06 15:45:15.822420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.847 [2024-12-06 15:45:15.822427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.847 [2024-12-06 15:45:15.834452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.847 [2024-12-06 15:45:15.834855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.847 [2024-12-06 15:45:15.834899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:09.847 [2024-12-06 15:45:15.834924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:09.847 [2024-12-06 15:45:15.835430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:09.847 [2024-12-06 15:45:15.835592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.847 [2024-12-06 15:45:15.835601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.847 [2024-12-06 15:45:15.835607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.847 [2024-12-06 15:45:15.835613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.106 [2024-12-06 15:45:15.847503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.106 [2024-12-06 15:45:15.847816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.106 [2024-12-06 15:45:15.847832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.106 [2024-12-06 15:45:15.847843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.106 [2024-12-06 15:45:15.848002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.106 [2024-12-06 15:45:15.848162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.106 [2024-12-06 15:45:15.848172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.106 [2024-12-06 15:45:15.848178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.106 [2024-12-06 15:45:15.848184] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.106 [2024-12-06 15:45:15.860290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.106 [2024-12-06 15:45:15.860660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.106 [2024-12-06 15:45:15.860677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.106 [2024-12-06 15:45:15.860685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.106 [2024-12-06 15:45:15.860844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.106 [2024-12-06 15:45:15.861004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.106 [2024-12-06 15:45:15.861013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.106 [2024-12-06 15:45:15.861019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.106 [2024-12-06 15:45:15.861026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.106 [2024-12-06 15:45:15.873142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.106 [2024-12-06 15:45:15.873449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.106 [2024-12-06 15:45:15.873466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.106 [2024-12-06 15:45:15.873474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.106 [2024-12-06 15:45:15.873634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.106 [2024-12-06 15:45:15.873794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.106 [2024-12-06 15:45:15.873804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.106 [2024-12-06 15:45:15.873810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.106 [2024-12-06 15:45:15.873817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.106 [2024-12-06 15:45:15.886140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.106 [2024-12-06 15:45:15.886540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.106 [2024-12-06 15:45:15.886558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.106 [2024-12-06 15:45:15.886566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.106 [2024-12-06 15:45:15.886735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.106 [2024-12-06 15:45:15.886907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.106 [2024-12-06 15:45:15.886918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.106 [2024-12-06 15:45:15.886925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.106 [2024-12-06 15:45:15.886932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.106 [2024-12-06 15:45:15.898959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.106 [2024-12-06 15:45:15.899360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.106 [2024-12-06 15:45:15.899381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.107 [2024-12-06 15:45:15.899406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.107 [2024-12-06 15:45:15.899990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.107 [2024-12-06 15:45:15.900526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.107 [2024-12-06 15:45:15.900535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.107 [2024-12-06 15:45:15.900542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.107 [2024-12-06 15:45:15.900548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.107 [2024-12-06 15:45:15.911780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.107 [2024-12-06 15:45:15.912172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.107 [2024-12-06 15:45:15.912188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.107 [2024-12-06 15:45:15.912195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.107 [2024-12-06 15:45:15.912354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.107 [2024-12-06 15:45:15.912520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.107 [2024-12-06 15:45:15.912531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.107 [2024-12-06 15:45:15.912537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.107 [2024-12-06 15:45:15.912543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.107 [2024-12-06 15:45:15.924522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.107 [2024-12-06 15:45:15.924846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.107 [2024-12-06 15:45:15.924863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.107 [2024-12-06 15:45:15.924870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.107 [2024-12-06 15:45:15.925030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.107 [2024-12-06 15:45:15.925189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.107 [2024-12-06 15:45:15.925199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.107 [2024-12-06 15:45:15.925205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.107 [2024-12-06 15:45:15.925216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.107 [2024-12-06 15:45:15.937278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.107 [2024-12-06 15:45:15.937591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.107 [2024-12-06 15:45:15.937608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.107 [2024-12-06 15:45:15.937615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.107 [2024-12-06 15:45:15.937775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.107 [2024-12-06 15:45:15.937936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.107 [2024-12-06 15:45:15.937945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.107 [2024-12-06 15:45:15.937951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.107 [2024-12-06 15:45:15.937957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.107 [2024-12-06 15:45:15.950016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.107 [2024-12-06 15:45:15.950381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.107 [2024-12-06 15:45:15.950398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.107 [2024-12-06 15:45:15.950407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.107 [2024-12-06 15:45:15.950565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.107 [2024-12-06 15:45:15.950725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.107 [2024-12-06 15:45:15.950735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.107 [2024-12-06 15:45:15.950741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.107 [2024-12-06 15:45:15.950747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.107 [2024-12-06 15:45:15.962876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.107 [2024-12-06 15:45:15.963211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.107 [2024-12-06 15:45:15.963256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.107 [2024-12-06 15:45:15.963281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.107 [2024-12-06 15:45:15.963792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.107 [2024-12-06 15:45:15.963953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.107 [2024-12-06 15:45:15.963963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.107 [2024-12-06 15:45:15.963970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.107 [2024-12-06 15:45:15.963977] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.107 [2024-12-06 15:45:15.975650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.107 [2024-12-06 15:45:15.976056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.107 [2024-12-06 15:45:15.976100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.107 [2024-12-06 15:45:15.976125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.107 [2024-12-06 15:45:15.976626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.107 [2024-12-06 15:45:15.976797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.107 [2024-12-06 15:45:15.976807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.107 [2024-12-06 15:45:15.976814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.107 [2024-12-06 15:45:15.976821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.107 [2024-12-06 15:45:15.988396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.107 [2024-12-06 15:45:15.988789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.107 [2024-12-06 15:45:15.988806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.107 [2024-12-06 15:45:15.988814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.107 [2024-12-06 15:45:15.988973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.107 [2024-12-06 15:45:15.989133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.107 [2024-12-06 15:45:15.989142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.107 [2024-12-06 15:45:15.989149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.107 [2024-12-06 15:45:15.989155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.107 [2024-12-06 15:45:16.001122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.107 [2024-12-06 15:45:16.001524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.107 [2024-12-06 15:45:16.001569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.107 [2024-12-06 15:45:16.001593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.107 [2024-12-06 15:45:16.002065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.107 [2024-12-06 15:45:16.002225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.107 [2024-12-06 15:45:16.002232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.107 [2024-12-06 15:45:16.002239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.107 [2024-12-06 15:45:16.002245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.107 [2024-12-06 15:45:16.013939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.107 [2024-12-06 15:45:16.014334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.107 [2024-12-06 15:45:16.014351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.107 [2024-12-06 15:45:16.014358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.107 [2024-12-06 15:45:16.014537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.107 [2024-12-06 15:45:16.014706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.107 [2024-12-06 15:45:16.014716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.107 [2024-12-06 15:45:16.014722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.107 [2024-12-06 15:45:16.014729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.107 [2024-12-06 15:45:16.027022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.107 [2024-12-06 15:45:16.027424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.108 [2024-12-06 15:45:16.027442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.108 [2024-12-06 15:45:16.027450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.108 [2024-12-06 15:45:16.027624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.108 [2024-12-06 15:45:16.027798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.108 [2024-12-06 15:45:16.027808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.108 [2024-12-06 15:45:16.027815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.108 [2024-12-06 15:45:16.027822] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.108 [2024-12-06 15:45:16.039949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.108 [2024-12-06 15:45:16.040345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.108 [2024-12-06 15:45:16.040363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.108 [2024-12-06 15:45:16.040377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.108 [2024-12-06 15:45:16.040546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.108 [2024-12-06 15:45:16.040715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.108 [2024-12-06 15:45:16.040726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.108 [2024-12-06 15:45:16.040732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.108 [2024-12-06 15:45:16.040739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.108 [2024-12-06 15:45:16.052895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.108 [2024-12-06 15:45:16.053301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.108 [2024-12-06 15:45:16.053320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.108 [2024-12-06 15:45:16.053328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.108 [2024-12-06 15:45:16.053496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.108 [2024-12-06 15:45:16.053657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.108 [2024-12-06 15:45:16.053670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.108 [2024-12-06 15:45:16.053677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.108 [2024-12-06 15:45:16.053684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.108 [2024-12-06 15:45:16.065646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.108 [2024-12-06 15:45:16.066015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.108 [2024-12-06 15:45:16.066032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.108 [2024-12-06 15:45:16.066040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.108 [2024-12-06 15:45:16.066199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.108 [2024-12-06 15:45:16.066360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.108 [2024-12-06 15:45:16.066376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.108 [2024-12-06 15:45:16.066383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.108 [2024-12-06 15:45:16.066391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.108 [2024-12-06 15:45:16.078520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.108 [2024-12-06 15:45:16.078882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.108 [2024-12-06 15:45:16.078899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.108 [2024-12-06 15:45:16.078906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.108 [2024-12-06 15:45:16.079065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.108 [2024-12-06 15:45:16.079225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.108 [2024-12-06 15:45:16.079235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.108 [2024-12-06 15:45:16.079241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.108 [2024-12-06 15:45:16.079248] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.108 [2024-12-06 15:45:16.091285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.108 [2024-12-06 15:45:16.091622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.108 [2024-12-06 15:45:16.091639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.108 [2024-12-06 15:45:16.091647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.108 [2024-12-06 15:45:16.091807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.108 [2024-12-06 15:45:16.091967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.108 [2024-12-06 15:45:16.091977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.108 [2024-12-06 15:45:16.091983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.108 [2024-12-06 15:45:16.091993] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.368 [2024-12-06 15:45:16.104256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.368 [2024-12-06 15:45:16.104619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.368 [2024-12-06 15:45:16.104666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.368 [2024-12-06 15:45:16.104690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.368 [2024-12-06 15:45:16.105123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.368 [2024-12-06 15:45:16.105285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.368 [2024-12-06 15:45:16.105294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.368 [2024-12-06 15:45:16.105317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.368 [2024-12-06 15:45:16.105324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.368 [2024-12-06 15:45:16.117063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.368 [2024-12-06 15:45:16.117434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.368 [2024-12-06 15:45:16.117452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.368 [2024-12-06 15:45:16.117460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.368 [2024-12-06 15:45:16.117620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.368 [2024-12-06 15:45:16.117781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.368 [2024-12-06 15:45:16.117790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.368 [2024-12-06 15:45:16.117796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.368 [2024-12-06 15:45:16.117802] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.368 [2024-12-06 15:45:16.129936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.368 [2024-12-06 15:45:16.130259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.368 [2024-12-06 15:45:16.130276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.368 [2024-12-06 15:45:16.130284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.368 [2024-12-06 15:45:16.130449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.368 [2024-12-06 15:45:16.130610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.368 [2024-12-06 15:45:16.130620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.368 [2024-12-06 15:45:16.130626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.368 [2024-12-06 15:45:16.130632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.368 [2024-12-06 15:45:16.142918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.368 [2024-12-06 15:45:16.143314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.368 [2024-12-06 15:45:16.143358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.368 [2024-12-06 15:45:16.143397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.368 [2024-12-06 15:45:16.143980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.368 [2024-12-06 15:45:16.144332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.368 [2024-12-06 15:45:16.144343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.368 [2024-12-06 15:45:16.144349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.368 [2024-12-06 15:45:16.144356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.368 [2024-12-06 15:45:16.155675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.368 [2024-12-06 15:45:16.155951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.368 [2024-12-06 15:45:16.155997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.368 [2024-12-06 15:45:16.156021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.368 [2024-12-06 15:45:16.156618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.368 [2024-12-06 15:45:16.157214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.368 [2024-12-06 15:45:16.157231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.368 [2024-12-06 15:45:16.157245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.368 [2024-12-06 15:45:16.157258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3163706 Killed "${NVMF_APP[@]}" "$@" 00:28:10.368 15:45:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:10.368 15:45:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:10.368 15:45:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:10.368 15:45:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.368 15:45:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:10.368 [2024-12-06 15:45:16.170441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.368 [2024-12-06 15:45:16.170846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.369 [2024-12-06 15:45:16.170867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.369 [2024-12-06 15:45:16.170878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.369 [2024-12-06 15:45:16.171113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.369 [2024-12-06 15:45:16.171351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.369 [2024-12-06 15:45:16.171363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.369 [2024-12-06 15:45:16.171381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:10.369 [2024-12-06 15:45:16.171394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.369 15:45:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3165308 00:28:10.369 15:45:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3165308 00:28:10.369 15:45:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:10.369 15:45:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3165308 ']' 00:28:10.369 15:45:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.369 15:45:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.369 15:45:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:10.369 15:45:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.369 15:45:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:10.369 [2024-12-06 15:45:16.183555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.369 [2024-12-06 15:45:16.183843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.369 [2024-12-06 15:45:16.183861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.369 [2024-12-06 15:45:16.183870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.369 [2024-12-06 15:45:16.184043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.369 [2024-12-06 15:45:16.184217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.369 [2024-12-06 15:45:16.184227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.369 [2024-12-06 15:45:16.184234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.369 [2024-12-06 15:45:16.184241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.369 [2024-12-06 15:45:16.196623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.369 [2024-12-06 15:45:16.196980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.369 [2024-12-06 15:45:16.196998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.369 [2024-12-06 15:45:16.197006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.369 [2024-12-06 15:45:16.197179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.369 [2024-12-06 15:45:16.197353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.369 [2024-12-06 15:45:16.197364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.369 [2024-12-06 15:45:16.197377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.369 [2024-12-06 15:45:16.197384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.369 [2024-12-06 15:45:16.209639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.369 [2024-12-06 15:45:16.209922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.369 [2024-12-06 15:45:16.209941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.369 [2024-12-06 15:45:16.209952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.369 [2024-12-06 15:45:16.210126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.369 [2024-12-06 15:45:16.210303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.369 [2024-12-06 15:45:16.210313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.369 [2024-12-06 15:45:16.210320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.369 [2024-12-06 15:45:16.210326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.369 [2024-12-06 15:45:16.222718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.369 [2024-12-06 15:45:16.223129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.369 [2024-12-06 15:45:16.223148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.369 [2024-12-06 15:45:16.223156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.369 [2024-12-06 15:45:16.223164] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:28:10.369 [2024-12-06 15:45:16.223205] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.369 [2024-12-06 15:45:16.223330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.369 [2024-12-06 15:45:16.223510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.369 [2024-12-06 15:45:16.223519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.369 [2024-12-06 15:45:16.223526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.369 [2024-12-06 15:45:16.223533] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.369 [2024-12-06 15:45:16.235794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.369 [2024-12-06 15:45:16.236130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.369 [2024-12-06 15:45:16.236148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.369 [2024-12-06 15:45:16.236157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.369 [2024-12-06 15:45:16.236325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.369 [2024-12-06 15:45:16.236498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.369 [2024-12-06 15:45:16.236508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.369 [2024-12-06 15:45:16.236515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.369 [2024-12-06 15:45:16.236523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.369 [2024-12-06 15:45:16.248678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.369 [2024-12-06 15:45:16.249088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.369 [2024-12-06 15:45:16.249106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.369 [2024-12-06 15:45:16.249117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.369 [2024-12-06 15:45:16.249286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.369 [2024-12-06 15:45:16.249459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.369 [2024-12-06 15:45:16.249470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.369 [2024-12-06 15:45:16.249477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.369 [2024-12-06 15:45:16.249485] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.369 [2024-12-06 15:45:16.261672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.369 [2024-12-06 15:45:16.262101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.369 [2024-12-06 15:45:16.262119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.369 [2024-12-06 15:45:16.262128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.369 [2024-12-06 15:45:16.262301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.369 [2024-12-06 15:45:16.262483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.369 [2024-12-06 15:45:16.262493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.369 [2024-12-06 15:45:16.262501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.369 [2024-12-06 15:45:16.262507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.369 [2024-12-06 15:45:16.274622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.369 [2024-12-06 15:45:16.275061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.369 [2024-12-06 15:45:16.275079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.369 [2024-12-06 15:45:16.275088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.369 [2024-12-06 15:45:16.275262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.369 [2024-12-06 15:45:16.275440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.369 [2024-12-06 15:45:16.275450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.369 [2024-12-06 15:45:16.275458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.369 [2024-12-06 15:45:16.275465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.369 [2024-12-06 15:45:16.287690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.369 [2024-12-06 15:45:16.288095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.369 [2024-12-06 15:45:16.288112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.369 [2024-12-06 15:45:16.288121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.369 [2024-12-06 15:45:16.288294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.369 [2024-12-06 15:45:16.288477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.370 [2024-12-06 15:45:16.288488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.370 [2024-12-06 15:45:16.288496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.370 [2024-12-06 15:45:16.288503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.370 [2024-12-06 15:45:16.300731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.370 [2024-12-06 15:45:16.301188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.370 [2024-12-06 15:45:16.301206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.370 [2024-12-06 15:45:16.301214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.370 [2024-12-06 15:45:16.301392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.370 [2024-12-06 15:45:16.301568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.370 [2024-12-06 15:45:16.301577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.370 [2024-12-06 15:45:16.301585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.370 [2024-12-06 15:45:16.301592] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.370 [2024-12-06 15:45:16.303363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:10.370 [2024-12-06 15:45:16.313717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.370 [2024-12-06 15:45:16.314081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.370 [2024-12-06 15:45:16.314102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.370 [2024-12-06 15:45:16.314112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.370 [2024-12-06 15:45:16.314282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.370 [2024-12-06 15:45:16.314460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.370 [2024-12-06 15:45:16.314471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.370 [2024-12-06 15:45:16.314479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.370 [2024-12-06 15:45:16.314487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.370 [2024-12-06 15:45:16.326661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.370 [2024-12-06 15:45:16.327084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.370 [2024-12-06 15:45:16.327102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.370 [2024-12-06 15:45:16.327110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.370 [2024-12-06 15:45:16.327278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.370 [2024-12-06 15:45:16.327455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.370 [2024-12-06 15:45:16.327470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.370 [2024-12-06 15:45:16.327477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.370 [2024-12-06 15:45:16.327484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.370 [2024-12-06 15:45:16.339589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.370 [2024-12-06 15:45:16.339919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.370 [2024-12-06 15:45:16.339937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.370 [2024-12-06 15:45:16.339945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.370 [2024-12-06 15:45:16.340113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.370 [2024-12-06 15:45:16.340282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.370 [2024-12-06 15:45:16.340292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.370 [2024-12-06 15:45:16.340299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.370 [2024-12-06 15:45:16.340306] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:10.370 [2024-12-06 15:45:16.345101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.370 [2024-12-06 15:45:16.345126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.370 [2024-12-06 15:45:16.345133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.370 [2024-12-06 15:45:16.345139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:10.370 [2024-12-06 15:45:16.345144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.370 [2024-12-06 15:45:16.346546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.370 [2024-12-06 15:45:16.346660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.370 [2024-12-06 15:45:16.346660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.370 [2024-12-06 15:45:16.352713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.370 [2024-12-06 15:45:16.353108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.370 [2024-12-06 15:45:16.353130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.370 [2024-12-06 15:45:16.353140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.370 [2024-12-06 15:45:16.353318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.370 [2024-12-06 15:45:16.353500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.370 [2024-12-06 15:45:16.353512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.370 [2024-12-06 15:45:16.353521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.370 [2024-12-06 15:45:16.353529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.631 [2024-12-06 15:45:16.365776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.631 [2024-12-06 15:45:16.366168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.631 [2024-12-06 15:45:16.366190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.631 [2024-12-06 15:45:16.366210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.631 [2024-12-06 15:45:16.366393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.631 [2024-12-06 15:45:16.366570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.631 [2024-12-06 15:45:16.366580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.631 [2024-12-06 15:45:16.366588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.631 [2024-12-06 15:45:16.366597] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.631 [2024-12-06 15:45:16.378836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.631 [2024-12-06 15:45:16.379294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.631 [2024-12-06 15:45:16.379315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.631 [2024-12-06 15:45:16.379325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.631 [2024-12-06 15:45:16.379507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.631 [2024-12-06 15:45:16.379684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.631 [2024-12-06 15:45:16.379695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.631 [2024-12-06 15:45:16.379703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.631 [2024-12-06 15:45:16.379712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.631 [2024-12-06 15:45:16.391958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.631 [2024-12-06 15:45:16.392362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.631 [2024-12-06 15:45:16.392391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.631 [2024-12-06 15:45:16.392401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.631 [2024-12-06 15:45:16.392578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.631 [2024-12-06 15:45:16.392755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.631 [2024-12-06 15:45:16.392765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.631 [2024-12-06 15:45:16.392773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.631 [2024-12-06 15:45:16.392782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.631 [2024-12-06 15:45:16.405033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.631 [2024-12-06 15:45:16.405483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.631 [2024-12-06 15:45:16.405506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.631 [2024-12-06 15:45:16.405516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.631 [2024-12-06 15:45:16.405691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.631 [2024-12-06 15:45:16.405874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.631 [2024-12-06 15:45:16.405884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.631 [2024-12-06 15:45:16.405892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.631 [2024-12-06 15:45:16.405900] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.631 [2024-12-06 15:45:16.418138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.631 [2024-12-06 15:45:16.418559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.631 [2024-12-06 15:45:16.418589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.631 [2024-12-06 15:45:16.418598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.631 [2024-12-06 15:45:16.418768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.631 [2024-12-06 15:45:16.418938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.631 [2024-12-06 15:45:16.418948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.631 [2024-12-06 15:45:16.418955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.631 [2024-12-06 15:45:16.418962] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.631 [2024-12-06 15:45:16.431193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.631 [2024-12-06 15:45:16.431636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.631 [2024-12-06 15:45:16.431655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.631 [2024-12-06 15:45:16.431664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.631 [2024-12-06 15:45:16.431837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.631 [2024-12-06 15:45:16.432011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.631 [2024-12-06 15:45:16.432022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.631 [2024-12-06 15:45:16.432031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.631 [2024-12-06 15:45:16.432038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.631 [2024-12-06 15:45:16.444268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.631 [2024-12-06 15:45:16.444560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.631 [2024-12-06 15:45:16.444578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.631 [2024-12-06 15:45:16.444586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.631 [2024-12-06 15:45:16.444761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.631 [2024-12-06 15:45:16.444934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.631 [2024-12-06 15:45:16.444944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.631 [2024-12-06 15:45:16.444955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.631 [2024-12-06 15:45:16.444963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.631 [2024-12-06 15:45:16.457361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.631 [2024-12-06 15:45:16.457704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.631 [2024-12-06 15:45:16.457722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.631 [2024-12-06 15:45:16.457730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.631 [2024-12-06 15:45:16.457903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.632 [2024-12-06 15:45:16.458078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.632 [2024-12-06 15:45:16.458088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.632 [2024-12-06 15:45:16.458096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.632 [2024-12-06 15:45:16.458103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.632 [2024-12-06 15:45:16.470502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.632 [2024-12-06 15:45:16.470838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.632 [2024-12-06 15:45:16.470856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.632 [2024-12-06 15:45:16.470864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.632 [2024-12-06 15:45:16.471038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.632 [2024-12-06 15:45:16.471213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.632 [2024-12-06 15:45:16.471222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.632 [2024-12-06 15:45:16.471229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.632 [2024-12-06 15:45:16.471236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.632 [2024-12-06 15:45:16.483621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.632 [2024-12-06 15:45:16.484054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.632 [2024-12-06 15:45:16.484072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.632 [2024-12-06 15:45:16.484080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.632 [2024-12-06 15:45:16.484255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.632 [2024-12-06 15:45:16.484434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.632 [2024-12-06 15:45:16.484444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.632 [2024-12-06 15:45:16.484451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.632 [2024-12-06 15:45:16.484458] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.632 [2024-12-06 15:45:16.496689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.632 [2024-12-06 15:45:16.497027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.632 [2024-12-06 15:45:16.497044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.632 [2024-12-06 15:45:16.497053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.632 [2024-12-06 15:45:16.497226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.632 [2024-12-06 15:45:16.497409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.632 [2024-12-06 15:45:16.497420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.632 [2024-12-06 15:45:16.497427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.632 [2024-12-06 15:45:16.497435] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.632 [2024-12-06 15:45:16.509683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.632 [2024-12-06 15:45:16.510092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.632 [2024-12-06 15:45:16.510110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.632 [2024-12-06 15:45:16.510118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.632 [2024-12-06 15:45:16.510292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.632 [2024-12-06 15:45:16.510470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.632 [2024-12-06 15:45:16.510481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.632 [2024-12-06 15:45:16.510488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.632 [2024-12-06 15:45:16.510495] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.632 [2024-12-06 15:45:16.522701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.632 [2024-12-06 15:45:16.523102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.632 [2024-12-06 15:45:16.523120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.632 [2024-12-06 15:45:16.523128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.632 [2024-12-06 15:45:16.523301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.632 [2024-12-06 15:45:16.523485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.632 [2024-12-06 15:45:16.523495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.632 [2024-12-06 15:45:16.523503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.632 [2024-12-06 15:45:16.523510] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.632 [2024-12-06 15:45:16.535722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.632 [2024-12-06 15:45:16.536051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.632 [2024-12-06 15:45:16.536069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.632 [2024-12-06 15:45:16.536080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.632 [2024-12-06 15:45:16.536254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.632 [2024-12-06 15:45:16.536434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.632 [2024-12-06 15:45:16.536445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.632 [2024-12-06 15:45:16.536452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.632 [2024-12-06 15:45:16.536459] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.632 [2024-12-06 15:45:16.548821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.632 [2024-12-06 15:45:16.549226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.632 [2024-12-06 15:45:16.549243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.632 [2024-12-06 15:45:16.549252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.632 [2024-12-06 15:45:16.549431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.632 [2024-12-06 15:45:16.549605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.632 [2024-12-06 15:45:16.549615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.632 [2024-12-06 15:45:16.549621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.632 [2024-12-06 15:45:16.549628] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.632 [2024-12-06 15:45:16.561834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.632 [2024-12-06 15:45:16.562176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.632 [2024-12-06 15:45:16.562193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.632 [2024-12-06 15:45:16.562201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.632 [2024-12-06 15:45:16.562379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.632 [2024-12-06 15:45:16.562553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.632 [2024-12-06 15:45:16.562563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.633 [2024-12-06 15:45:16.562570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.633 [2024-12-06 15:45:16.562576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.633 [2024-12-06 15:45:16.574930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.633 [2024-12-06 15:45:16.575307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.633 [2024-12-06 15:45:16.575325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.633 [2024-12-06 15:45:16.575333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.633 [2024-12-06 15:45:16.575511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.633 [2024-12-06 15:45:16.575689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.633 [2024-12-06 15:45:16.575699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.633 [2024-12-06 15:45:16.575706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.633 [2024-12-06 15:45:16.575713] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.633 [2024-12-06 15:45:16.587917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.633 [2024-12-06 15:45:16.588319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.633 [2024-12-06 15:45:16.588337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.633 [2024-12-06 15:45:16.588344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.633 [2024-12-06 15:45:16.588522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.633 [2024-12-06 15:45:16.588695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.633 [2024-12-06 15:45:16.588705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.633 [2024-12-06 15:45:16.588712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.633 [2024-12-06 15:45:16.588719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.633 [2024-12-06 15:45:16.600932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.633 [2024-12-06 15:45:16.601309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.633 [2024-12-06 15:45:16.601327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.633 [2024-12-06 15:45:16.601336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.633 [2024-12-06 15:45:16.601514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.633 [2024-12-06 15:45:16.601688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.633 [2024-12-06 15:45:16.601698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.633 [2024-12-06 15:45:16.601705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.633 [2024-12-06 15:45:16.601711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.633 [2024-12-06 15:45:16.613939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.633 [2024-12-06 15:45:16.614342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.633 [2024-12-06 15:45:16.614360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.633 [2024-12-06 15:45:16.614373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.633 [2024-12-06 15:45:16.614546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.633 [2024-12-06 15:45:16.614720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.633 [2024-12-06 15:45:16.614730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.633 [2024-12-06 15:45:16.614741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.633 [2024-12-06 15:45:16.614749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.894 4889.33 IOPS, 19.10 MiB/s [2024-12-06T14:45:16.892Z] [2024-12-06 15:45:16.626906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.894 [2024-12-06 15:45:16.627310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.894 [2024-12-06 15:45:16.627328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.894 [2024-12-06 15:45:16.627336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.894 [2024-12-06 15:45:16.627514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.894 [2024-12-06 15:45:16.627688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.894 [2024-12-06 15:45:16.627697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.894 [2024-12-06 15:45:16.627704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.894 [2024-12-06 15:45:16.627711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.894 [2024-12-06 15:45:16.639919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.894 [2024-12-06 15:45:16.640317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.894 [2024-12-06 15:45:16.640334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.894 [2024-12-06 15:45:16.640341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.894 [2024-12-06 15:45:16.640520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.894 [2024-12-06 15:45:16.640695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.894 [2024-12-06 15:45:16.640705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.894 [2024-12-06 15:45:16.640712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.894 [2024-12-06 15:45:16.640719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.894 [2024-12-06 15:45:16.652919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.894 [2024-12-06 15:45:16.653320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.894 [2024-12-06 15:45:16.653338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.894 [2024-12-06 15:45:16.653345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.894 [2024-12-06 15:45:16.653525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.894 [2024-12-06 15:45:16.653699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.894 [2024-12-06 15:45:16.653709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.894 [2024-12-06 15:45:16.653716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.894 [2024-12-06 15:45:16.653723] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.894 [2024-12-06 15:45:16.665942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.894 [2024-12-06 15:45:16.666352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.895 [2024-12-06 15:45:16.666375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.895 [2024-12-06 15:45:16.666384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.895 [2024-12-06 15:45:16.666557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.895 [2024-12-06 15:45:16.666733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.895 [2024-12-06 15:45:16.666743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.895 [2024-12-06 15:45:16.666750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.895 [2024-12-06 15:45:16.666756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.895 [2024-12-06 15:45:16.678968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.895 [2024-12-06 15:45:16.679376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.895 [2024-12-06 15:45:16.679395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.895 [2024-12-06 15:45:16.679403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.895 [2024-12-06 15:45:16.679576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.895 [2024-12-06 15:45:16.679749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.895 [2024-12-06 15:45:16.679760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.895 [2024-12-06 15:45:16.679767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.895 [2024-12-06 15:45:16.679773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.895 [2024-12-06 15:45:16.691973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.895 [2024-12-06 15:45:16.692355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.895 [2024-12-06 15:45:16.692376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.895 [2024-12-06 15:45:16.692385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.895 [2024-12-06 15:45:16.692558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.895 [2024-12-06 15:45:16.692731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.895 [2024-12-06 15:45:16.692741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.895 [2024-12-06 15:45:16.692748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.895 [2024-12-06 15:45:16.692755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.895 [2024-12-06 15:45:16.704961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.895 [2024-12-06 15:45:16.705363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.895 [2024-12-06 15:45:16.705385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.895 [2024-12-06 15:45:16.705397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.895 [2024-12-06 15:45:16.705577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.895 [2024-12-06 15:45:16.705753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.895 [2024-12-06 15:45:16.705764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.895 [2024-12-06 15:45:16.705770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.895 [2024-12-06 15:45:16.705777] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.895 [2024-12-06 15:45:16.718003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.895 [2024-12-06 15:45:16.718406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.895 [2024-12-06 15:45:16.718425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.895 [2024-12-06 15:45:16.718433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.895 [2024-12-06 15:45:16.718607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.895 [2024-12-06 15:45:16.718780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.895 [2024-12-06 15:45:16.718790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.895 [2024-12-06 15:45:16.718797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.895 [2024-12-06 15:45:16.718803] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.895 [2024-12-06 15:45:16.731004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.895 [2024-12-06 15:45:16.731325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.895 [2024-12-06 15:45:16.731342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.895 [2024-12-06 15:45:16.731350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.895 [2024-12-06 15:45:16.731528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.895 [2024-12-06 15:45:16.731703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.895 [2024-12-06 15:45:16.731713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.895 [2024-12-06 15:45:16.731720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.895 [2024-12-06 15:45:16.731727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.895 [2024-12-06 15:45:16.744103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.895 [2024-12-06 15:45:16.744511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.895 [2024-12-06 15:45:16.744530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.895 [2024-12-06 15:45:16.744538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.895 [2024-12-06 15:45:16.744711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.895 [2024-12-06 15:45:16.744890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.895 [2024-12-06 15:45:16.744900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.895 [2024-12-06 15:45:16.744907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.895 [2024-12-06 15:45:16.744914] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.895 [2024-12-06 15:45:16.757125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.895 [2024-12-06 15:45:16.757578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.895 [2024-12-06 15:45:16.757596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.895 [2024-12-06 15:45:16.757604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.895 [2024-12-06 15:45:16.757777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.896 [2024-12-06 15:45:16.757951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.896 [2024-12-06 15:45:16.757960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.896 [2024-12-06 15:45:16.757967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.896 [2024-12-06 15:45:16.757974] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.896 [2024-12-06 15:45:16.770174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.896 [2024-12-06 15:45:16.770583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.896 [2024-12-06 15:45:16.770601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.896 [2024-12-06 15:45:16.770609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.896 [2024-12-06 15:45:16.770783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.896 [2024-12-06 15:45:16.770957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.896 [2024-12-06 15:45:16.770967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.896 [2024-12-06 15:45:16.770974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.896 [2024-12-06 15:45:16.770981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.896 [2024-12-06 15:45:16.783183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.896 [2024-12-06 15:45:16.783512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.896 [2024-12-06 15:45:16.783530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.896 [2024-12-06 15:45:16.783538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.896 [2024-12-06 15:45:16.783711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.896 [2024-12-06 15:45:16.783885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.896 [2024-12-06 15:45:16.783895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.896 [2024-12-06 15:45:16.783906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.896 [2024-12-06 15:45:16.783913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.896 [2024-12-06 15:45:16.796271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.896 [2024-12-06 15:45:16.796676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.896 [2024-12-06 15:45:16.796694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.896 [2024-12-06 15:45:16.796703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.896 [2024-12-06 15:45:16.796876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.896 [2024-12-06 15:45:16.797050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.896 [2024-12-06 15:45:16.797059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.896 [2024-12-06 15:45:16.797067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.896 [2024-12-06 15:45:16.797074] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.896 [2024-12-06 15:45:16.809301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.896 [2024-12-06 15:45:16.809710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.896 [2024-12-06 15:45:16.809728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.896 [2024-12-06 15:45:16.809736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.896 [2024-12-06 15:45:16.809910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.896 [2024-12-06 15:45:16.810083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.896 [2024-12-06 15:45:16.810093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.896 [2024-12-06 15:45:16.810100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.896 [2024-12-06 15:45:16.810106] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.896 [2024-12-06 15:45:16.822329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.896 [2024-12-06 15:45:16.822731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.896 [2024-12-06 15:45:16.822749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.896 [2024-12-06 15:45:16.822757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.896 [2024-12-06 15:45:16.822930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.896 [2024-12-06 15:45:16.823103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.896 [2024-12-06 15:45:16.823113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.896 [2024-12-06 15:45:16.823121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.896 [2024-12-06 15:45:16.823127] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.896 [2024-12-06 15:45:16.835325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.896 [2024-12-06 15:45:16.835733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.896 [2024-12-06 15:45:16.835750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.896 [2024-12-06 15:45:16.835759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.896 [2024-12-06 15:45:16.835932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.896 [2024-12-06 15:45:16.836107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.896 [2024-12-06 15:45:16.836117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.896 [2024-12-06 15:45:16.836124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.896 [2024-12-06 15:45:16.836131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.896 [2024-12-06 15:45:16.848335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.896 [2024-12-06 15:45:16.848741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.896 [2024-12-06 15:45:16.848759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.896 [2024-12-06 15:45:16.848768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.896 [2024-12-06 15:45:16.848943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.896 [2024-12-06 15:45:16.849116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.896 [2024-12-06 15:45:16.849126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.896 [2024-12-06 15:45:16.849132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.896 [2024-12-06 15:45:16.849139] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.896 [2024-12-06 15:45:16.861332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.896 [2024-12-06 15:45:16.861738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.897 [2024-12-06 15:45:16.861756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.897 [2024-12-06 15:45:16.861764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.897 [2024-12-06 15:45:16.861938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.897 [2024-12-06 15:45:16.862115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.897 [2024-12-06 15:45:16.862124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.897 [2024-12-06 15:45:16.862131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.897 [2024-12-06 15:45:16.862138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.897 [2024-12-06 15:45:16.874350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:10.897 [2024-12-06 15:45:16.874679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:10.897 [2024-12-06 15:45:16.874697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:10.897 [2024-12-06 15:45:16.874708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:10.897 [2024-12-06 15:45:16.874881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:10.897 [2024-12-06 15:45:16.875055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:10.897 [2024-12-06 15:45:16.875065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:10.897 [2024-12-06 15:45:16.875072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:10.897 [2024-12-06 15:45:16.875081] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:10.897 [2024-12-06 15:45:16.887469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:10.897 [2024-12-06 15:45:16.887881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:10.897 [2024-12-06 15:45:16.887899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:10.897 [2024-12-06 15:45:16.887907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:10.897 [2024-12-06 15:45:16.888082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:10.897 [2024-12-06 15:45:16.888256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:10.897 [2024-12-06 15:45:16.888266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:10.897 [2024-12-06 15:45:16.888273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:10.897 [2024-12-06 15:45:16.888281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.158 [2024-12-06 15:45:16.900506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.158 [2024-12-06 15:45:16.900919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.158 [2024-12-06 15:45:16.900937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.158 [2024-12-06 15:45:16.900945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.158 [2024-12-06 15:45:16.901118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.158 [2024-12-06 15:45:16.901291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.158 [2024-12-06 15:45:16.901301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.158 [2024-12-06 15:45:16.901307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.158 [2024-12-06 15:45:16.901315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.158 [2024-12-06 15:45:16.913547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.158 [2024-12-06 15:45:16.913860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.158 [2024-12-06 15:45:16.913888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.158 [2024-12-06 15:45:16.913895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.158 [2024-12-06 15:45:16.914069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.158 [2024-12-06 15:45:16.914246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.158 [2024-12-06 15:45:16.914256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.158 [2024-12-06 15:45:16.914262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.158 [2024-12-06 15:45:16.914269] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.158 [2024-12-06 15:45:16.926648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.158 [2024-12-06 15:45:16.927057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.158 [2024-12-06 15:45:16.927074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.158 [2024-12-06 15:45:16.927082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.158 [2024-12-06 15:45:16.927256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.158 [2024-12-06 15:45:16.927435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.158 [2024-12-06 15:45:16.927445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.158 [2024-12-06 15:45:16.927452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.158 [2024-12-06 15:45:16.927460] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.158 [2024-12-06 15:45:16.939678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.158 [2024-12-06 15:45:16.940015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.158 [2024-12-06 15:45:16.940033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.158 [2024-12-06 15:45:16.940042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.158 [2024-12-06 15:45:16.940215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.158 [2024-12-06 15:45:16.940393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.158 [2024-12-06 15:45:16.940403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.158 [2024-12-06 15:45:16.940411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.158 [2024-12-06 15:45:16.940418] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.158 [2024-12-06 15:45:16.952794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.158 [2024-12-06 15:45:16.953177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.158 [2024-12-06 15:45:16.953195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.158 [2024-12-06 15:45:16.953203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.158 [2024-12-06 15:45:16.953384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.158 [2024-12-06 15:45:16.953558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.158 [2024-12-06 15:45:16.953570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.158 [2024-12-06 15:45:16.953577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.158 [2024-12-06 15:45:16.953588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.158 [2024-12-06 15:45:16.965806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.158 [2024-12-06 15:45:16.966134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.159 [2024-12-06 15:45:16.966153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.159 [2024-12-06 15:45:16.966161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.159 [2024-12-06 15:45:16.966334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.159 [2024-12-06 15:45:16.966513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.159 [2024-12-06 15:45:16.966524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.159 [2024-12-06 15:45:16.966532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.159 [2024-12-06 15:45:16.966539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.159 [2024-12-06 15:45:16.978922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.159 [2024-12-06 15:45:16.979216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.159 [2024-12-06 15:45:16.979234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.159 [2024-12-06 15:45:16.979242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.159 [2024-12-06 15:45:16.979421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.159 [2024-12-06 15:45:16.979595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.159 [2024-12-06 15:45:16.979605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.159 [2024-12-06 15:45:16.979613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.159 [2024-12-06 15:45:16.979620] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.159 [2024-12-06 15:45:16.992022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.159 [2024-12-06 15:45:16.992412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.159 [2024-12-06 15:45:16.992430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.159 [2024-12-06 15:45:16.992438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.159 [2024-12-06 15:45:16.992612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.159 [2024-12-06 15:45:16.992786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.159 [2024-12-06 15:45:16.992797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.159 [2024-12-06 15:45:16.992805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.159 [2024-12-06 15:45:16.992813] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.159 [2024-12-06 15:45:17.005037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.159 [2024-12-06 15:45:17.005362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.159 [2024-12-06 15:45:17.005385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.159 [2024-12-06 15:45:17.005394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.159 [2024-12-06 15:45:17.005566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.159 [2024-12-06 15:45:17.005741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.159 [2024-12-06 15:45:17.005751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.159 [2024-12-06 15:45:17.005758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.159 [2024-12-06 15:45:17.005765] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.159 [2024-12-06 15:45:17.018173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.159 [2024-12-06 15:45:17.018512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.159 [2024-12-06 15:45:17.018531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.159 [2024-12-06 15:45:17.018539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.159 [2024-12-06 15:45:17.018712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.159 [2024-12-06 15:45:17.018886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.159 [2024-12-06 15:45:17.018897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.159 [2024-12-06 15:45:17.018904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.159 [2024-12-06 15:45:17.018911] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.159 [2024-12-06 15:45:17.031282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.159 [2024-12-06 15:45:17.031687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.159 [2024-12-06 15:45:17.031706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.159 [2024-12-06 15:45:17.031714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.159 [2024-12-06 15:45:17.031888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.159 [2024-12-06 15:45:17.032063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.159 [2024-12-06 15:45:17.032072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.159 [2024-12-06 15:45:17.032079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.159 [2024-12-06 15:45:17.032085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.159 [2024-12-06 15:45:17.044288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.159 [2024-12-06 15:45:17.044646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.159 [2024-12-06 15:45:17.044665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.159 [2024-12-06 15:45:17.044675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.159 [2024-12-06 15:45:17.044850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.159 [2024-12-06 15:45:17.045023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.159 [2024-12-06 15:45:17.045033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.159 [2024-12-06 15:45:17.045039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.159 [2024-12-06 15:45:17.045046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.159 [2024-12-06 15:45:17.057288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.159 [2024-12-06 15:45:17.057633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.159 [2024-12-06 15:45:17.057652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.159 [2024-12-06 15:45:17.057661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.159 [2024-12-06 15:45:17.057833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.159 [2024-12-06 15:45:17.058007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.160 [2024-12-06 15:45:17.058017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.160 [2024-12-06 15:45:17.058024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.160 [2024-12-06 15:45:17.058031] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.160 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:11.160 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:28:11.160 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:11.160 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:11.160 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:11.160 [2024-12-06 15:45:17.070258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.160 [2024-12-06 15:45:17.070669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.160 [2024-12-06 15:45:17.070687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.160 [2024-12-06 15:45:17.070695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.160 [2024-12-06 15:45:17.070868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.160 [2024-12-06 15:45:17.071042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.160 [2024-12-06 15:45:17.071052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.160 [2024-12-06 15:45:17.071060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.160 [2024-12-06 15:45:17.071066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.160 [2024-12-06 15:45:17.083284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.160 [2024-12-06 15:45:17.083618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.160 [2024-12-06 15:45:17.083638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.160 [2024-12-06 15:45:17.083651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.160 [2024-12-06 15:45:17.083826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.160 [2024-12-06 15:45:17.084000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.160 [2024-12-06 15:45:17.084010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.160 [2024-12-06 15:45:17.084017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.160 [2024-12-06 15:45:17.084024] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.160 [2024-12-06 15:45:17.096416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.160 [2024-12-06 15:45:17.096699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.160 [2024-12-06 15:45:17.096718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.160 [2024-12-06 15:45:17.096726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.160 [2024-12-06 15:45:17.096900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.160 [2024-12-06 15:45:17.097074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.160 [2024-12-06 15:45:17.097084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.160 [2024-12-06 15:45:17.097091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.160 [2024-12-06 15:45:17.097097] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.160 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:11.160 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:11.160 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.160 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:11.160 [2024-12-06 15:45:17.106247] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:11.160 [2024-12-06 15:45:17.109510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.160 [2024-12-06 15:45:17.109917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.160 [2024-12-06 15:45:17.109934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.160 [2024-12-06 15:45:17.109942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.160 [2024-12-06 15:45:17.110115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.160 [2024-12-06 15:45:17.110289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.160 [2024-12-06 15:45:17.110299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.160 [2024-12-06 15:45:17.110306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.160 [2024-12-06 15:45:17.110312] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.160 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.160 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:11.160 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.160 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:11.160 [2024-12-06 15:45:17.122542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.160 [2024-12-06 15:45:17.122860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.160 [2024-12-06 15:45:17.122878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.160 [2024-12-06 15:45:17.122886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.160 [2024-12-06 15:45:17.123059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.160 [2024-12-06 15:45:17.123233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.160 [2024-12-06 15:45:17.123242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.160 [2024-12-06 15:45:17.123249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.160 [2024-12-06 15:45:17.123256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.160 [2024-12-06 15:45:17.135631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.160 [2024-12-06 15:45:17.136036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.160 [2024-12-06 15:45:17.136054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.160 [2024-12-06 15:45:17.136062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.160 [2024-12-06 15:45:17.136235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.160 [2024-12-06 15:45:17.136415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.160 [2024-12-06 15:45:17.136425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.161 [2024-12-06 15:45:17.136432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.161 [2024-12-06 15:45:17.136439] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.161 [2024-12-06 15:45:17.148658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.161 [2024-12-06 15:45:17.149022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.161 [2024-12-06 15:45:17.149040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.161 [2024-12-06 15:45:17.149049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.161 [2024-12-06 15:45:17.149223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.161 [2024-12-06 15:45:17.149402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.161 [2024-12-06 15:45:17.149412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.161 [2024-12-06 15:45:17.149419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.161 [2024-12-06 15:45:17.149426] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.161 Malloc0
00:28:11.420 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:11.420 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:11.420 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:11.420 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:11.420 [2024-12-06 15:45:17.161654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:11.420 [2024-12-06 15:45:17.162039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:11.420 [2024-12-06 15:45:17.162057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420
00:28:11.420 [2024-12-06 15:45:17.162065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set
00:28:11.420 [2024-12-06 15:45:17.162238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor
00:28:11.420 [2024-12-06 15:45:17.162416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:11.420 [2024-12-06 15:45:17.162426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:11.420 [2024-12-06 15:45:17.162433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:11.420 [2024-12-06 15:45:17.162441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:11.420 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.420 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:11.420 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.420 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.420 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.420 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.420 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.420 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:11.420 [2024-12-06 15:45:17.174665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.420 [2024-12-06 15:45:17.175052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.420 [2024-12-06 15:45:17.175070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8b120 with addr=10.0.0.2, port=4420 00:28:11.420 [2024-12-06 15:45:17.175078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8b120 is same with the state(6) to be set 00:28:11.420 [2024-12-06 15:45:17.175251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8b120 (9): Bad file descriptor 00:28:11.420 [2024-12-06 15:45:17.175431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.420 [2024-12-06 15:45:17.175441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:28:11.420 [2024-12-06 15:45:17.175448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.420 [2024-12-06 15:45:17.175455] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:11.420 [2024-12-06 15:45:17.176420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.420 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.420 15:45:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3164340 00:28:11.420 [2024-12-06 15:45:17.187653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.420 [2024-12-06 15:45:17.213214] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:28:13.058 4849.57 IOPS, 18.94 MiB/s [2024-12-06T14:45:19.994Z] 5697.88 IOPS, 22.26 MiB/s [2024-12-06T14:45:20.932Z] 6343.11 IOPS, 24.78 MiB/s [2024-12-06T14:45:21.870Z] 6845.70 IOPS, 26.74 MiB/s [2024-12-06T14:45:22.807Z] 7267.91 IOPS, 28.39 MiB/s [2024-12-06T14:45:23.741Z] 7624.17 IOPS, 29.78 MiB/s [2024-12-06T14:45:24.676Z] 7921.62 IOPS, 30.94 MiB/s [2024-12-06T14:45:26.062Z] 8182.29 IOPS, 31.96 MiB/s 00:28:20.064 Latency(us) 00:28:20.064 [2024-12-06T14:45:26.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.064 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:20.064 Verification LBA range: start 0x0 length 0x4000 00:28:20.064 Nvme1n1 : 15.00 8383.82 32.75 13140.44 0.00 5927.58 448.61 22094.99 00:28:20.064 [2024-12-06T14:45:26.062Z] =================================================================================================================== 00:28:20.064 [2024-12-06T14:45:26.062Z] Total : 8383.82 32.75 13140.44 0.00 5927.58 
448.61 22094.99 00:28:20.064 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:20.064 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:20.064 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.064 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:20.064 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.064 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:20.064 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:20.064 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:20.064 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:20.064 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:20.064 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:20.065 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:20.065 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:20.065 rmmod nvme_tcp 00:28:20.065 rmmod nvme_fabrics 00:28:20.065 rmmod nvme_keyring 00:28:20.065 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:20.065 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:20.065 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:20.065 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3165308 ']' 00:28:20.065 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3165308 00:28:20.065 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@954 -- # '[' -z 3165308 ']' 00:28:20.065 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3165308 00:28:20.065 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:20.065 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.065 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3165308 00:28:20.065 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:20.065 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:20.066 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3165308' 00:28:20.066 killing process with pid 3165308 00:28:20.066 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3165308 00:28:20.066 15:45:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3165308 00:28:20.328 15:45:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:20.328 15:45:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:20.328 15:45:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:20.328 15:45:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:20.328 15:45:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:20.328 15:45:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:20.328 15:45:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:20.328 15:45:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:20.328 15:45:26 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:20.328 15:45:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.328 15:45:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.328 15:45:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.227 15:45:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:22.227 00:28:22.227 real 0m26.086s 00:28:22.227 user 1m0.934s 00:28:22.227 sys 0m6.809s 00:28:22.227 15:45:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:22.227 15:45:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:22.227 ************************************ 00:28:22.227 END TEST nvmf_bdevperf 00:28:22.227 ************************************ 00:28:22.227 15:45:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:22.227 15:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:22.227 15:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:22.227 15:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.485 ************************************ 00:28:22.485 START TEST nvmf_target_disconnect 00:28:22.485 ************************************ 00:28:22.485 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:22.485 * Looking for test storage... 
00:28:22.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:22.485 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:22.485 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:22.486 15:45:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:22.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.486 
--rc genhtml_branch_coverage=1 00:28:22.486 --rc genhtml_function_coverage=1 00:28:22.486 --rc genhtml_legend=1 00:28:22.486 --rc geninfo_all_blocks=1 00:28:22.486 --rc geninfo_unexecuted_blocks=1 00:28:22.486 00:28:22.486 ' 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:22.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.486 --rc genhtml_branch_coverage=1 00:28:22.486 --rc genhtml_function_coverage=1 00:28:22.486 --rc genhtml_legend=1 00:28:22.486 --rc geninfo_all_blocks=1 00:28:22.486 --rc geninfo_unexecuted_blocks=1 00:28:22.486 00:28:22.486 ' 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:22.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.486 --rc genhtml_branch_coverage=1 00:28:22.486 --rc genhtml_function_coverage=1 00:28:22.486 --rc genhtml_legend=1 00:28:22.486 --rc geninfo_all_blocks=1 00:28:22.486 --rc geninfo_unexecuted_blocks=1 00:28:22.486 00:28:22.486 ' 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:22.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:22.486 --rc genhtml_branch_coverage=1 00:28:22.486 --rc genhtml_function_coverage=1 00:28:22.486 --rc genhtml_legend=1 00:28:22.486 --rc geninfo_all_blocks=1 00:28:22.486 --rc geninfo_unexecuted_blocks=1 00:28:22.486 00:28:22.486 ' 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.486 15:45:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:22.486 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:22.486 15:45:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:29.057 
15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:29.057 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:29.057 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.057 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:29.058 Found net devices under 0000:86:00.0: cvl_0_0 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:29.058 Found net devices under 0000:86:00.1: cvl_0_1 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:29.058 15:45:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:29.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:29.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:28:29.058 00:28:29.058 --- 10.0.0.2 ping statistics --- 00:28:29.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.058 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:29.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:29.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:28:29.058 00:28:29.058 --- 10.0.0.1 ping statistics --- 00:28:29.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.058 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:29.058 15:45:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:29.058 ************************************ 00:28:29.058 START TEST nvmf_target_disconnect_tc1 00:28:29.058 ************************************ 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.058 [2024-12-06 15:45:34.561390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.058 [2024-12-06 15:45:34.561432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228aac0 with 
addr=10.0.0.2, port=4420 00:28:29.058 [2024-12-06 15:45:34.561453] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:29.058 [2024-12-06 15:45:34.561462] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:29.058 [2024-12-06 15:45:34.561468] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:29.058 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:29.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:29.058 Initializing NVMe Controllers 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:29.058 00:28:29.058 real 0m0.120s 00:28:29.058 user 0m0.051s 00:28:29.058 sys 0m0.068s 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:29.058 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:29.058 ************************************ 00:28:29.058 END TEST nvmf_target_disconnect_tc1 00:28:29.058 ************************************ 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:29.059 15:45:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:29.059 ************************************ 00:28:29.059 START TEST nvmf_target_disconnect_tc2 00:28:29.059 ************************************ 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3170447 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3170447 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3170447 ']' 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.059 [2024-12-06 15:45:34.700637] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:28:29.059 [2024-12-06 15:45:34.700677] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.059 [2024-12-06 15:45:34.776379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.059 [2024-12-06 15:45:34.817398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.059 [2024-12-06 15:45:34.817437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:29.059 [2024-12-06 15:45:34.817444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.059 [2024-12-06 15:45:34.817450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.059 [2024-12-06 15:45:34.817456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:29.059 [2024-12-06 15:45:34.818964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:29.059 [2024-12-06 15:45:34.819070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:29.059 [2024-12-06 15:45:34.819174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:29.059 [2024-12-06 15:45:34.819174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.059 Malloc0 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.059 15:45:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.059 [2024-12-06 15:45:34.980910] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.059 15:45:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.059 15:45:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.059 15:45:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:29.059 15:45:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.059 15:45:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.059 [2024-12-06 15:45:35.009990] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.059 15:45:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.059 15:45:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:29.059 15:45:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.059 15:45:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:29.059 15:45:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.059 15:45:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3170571 00:28:29.059 15:45:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:29.059 15:45:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:31.610 15:45:37 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3170447 00:28:31.610 15:45:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 
Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 [2024-12-06 15:45:37.038291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 
00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Write completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.610 starting I/O failed 00:28:31.610 
[2024-12-06 15:45:37.038499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:31.610 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 
00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 [2024-12-06 15:45:37.038696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 
starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Write completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 Read completed with error (sct=0, sc=8) 00:28:31.611 starting I/O failed 00:28:31.611 [2024-12-06 15:45:37.038892] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:31.611 [2024-12-06 15:45:37.039075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.611 [2024-12-06 15:45:37.039100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.611 qpair failed and we were unable to recover it. 00:28:31.611 [2024-12-06 15:45:37.039314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.611 [2024-12-06 15:45:37.039346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.611 qpair failed and we were unable to recover it. 00:28:31.611 [2024-12-06 15:45:37.039488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.611 [2024-12-06 15:45:37.039520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.611 qpair failed and we were unable to recover it. 00:28:31.611 [2024-12-06 15:45:37.039707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.611 [2024-12-06 15:45:37.039740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.611 qpair failed and we were unable to recover it. 00:28:31.611 [2024-12-06 15:45:37.039944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.611 [2024-12-06 15:45:37.039956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.611 qpair failed and we were unable to recover it. 
00:28:31.614 [2024-12-06 15:45:37.060183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.060217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-06 15:45:37.060461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.060494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-06 15:45:37.060630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.060663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-06 15:45:37.060848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.060881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-06 15:45:37.061050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.061083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 
00:28:31.614 [2024-12-06 15:45:37.061281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.061315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-06 15:45:37.061622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.061657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-06 15:45:37.061778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.061813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-06 15:45:37.061994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.062027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-06 15:45:37.062214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.062248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 
00:28:31.614 [2024-12-06 15:45:37.062489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.062523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-06 15:45:37.062716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.062748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-06 15:45:37.062892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.062925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-06 15:45:37.063107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.063140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-06 15:45:37.063274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.063307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 
00:28:31.614 [2024-12-06 15:45:37.063520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-06 15:45:37.063553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.063765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.063798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.063970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.064003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.064244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.064277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.064403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.064437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 
00:28:31.615 [2024-12-06 15:45:37.064619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.064653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.064824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.064857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.064962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.064995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.065259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.065292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.065485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.065519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 
00:28:31.615 [2024-12-06 15:45:37.065657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.065690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.065861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.065894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.066077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.066110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.066294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.066327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.066532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.066566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 
00:28:31.615 [2024-12-06 15:45:37.066760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.066793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.066912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.066946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.067133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.067165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.067344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.067394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.067661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.067694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 
00:28:31.615 [2024-12-06 15:45:37.067809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.067842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.068023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.068056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.068298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.068331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.068474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.068508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.068699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.068732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 
00:28:31.615 [2024-12-06 15:45:37.068985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.069018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.069202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.069234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.069519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.069554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.069730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.069763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.070054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.070087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 
00:28:31.615 [2024-12-06 15:45:37.070356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.070398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.070572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.070605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.070802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.070836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-06 15:45:37.071035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-06 15:45:37.071069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.071273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.071305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 
00:28:31.616 [2024-12-06 15:45:37.071447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.071481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.071668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.071702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.071958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.071990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.072167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.072200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.072395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.072432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 
00:28:31.616 [2024-12-06 15:45:37.072555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.072587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.072783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.072817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.072993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.073026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.073201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.073235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.073448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.073483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 
00:28:31.616 [2024-12-06 15:45:37.073755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.073788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.073962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.073994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.074213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.074246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.074422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.074456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.074734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.074766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 
00:28:31.616 [2024-12-06 15:45:37.075006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.075039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.075170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.075203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.075380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.075415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.075683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.075716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.075917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.075950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 
00:28:31.616 [2024-12-06 15:45:37.076126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.076158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.076381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.076416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.076546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.076580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.076755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.076792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-06 15:45:37.077055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.077090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 
00:28:31.616 [2024-12-06 15:45:37.077282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-06 15:45:37.077315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." record repeated for tqpair=0x7f6e80000b90, addr=10.0.0.2, port=4420, from 15:45:37.077519 through 15:45:37.101552 ...]
00:28:31.619 [2024-12-06 15:45:37.101859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-06 15:45:37.101936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-06 15:45:37.102136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-06 15:45:37.102172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-06 15:45:37.102388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-06 15:45:37.102425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-06 15:45:37.102633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-06 15:45:37.102667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-06 15:45:37.102907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-06 15:45:37.102940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 
00:28:31.619 [2024-12-06 15:45:37.103117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-06 15:45:37.103150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-06 15:45:37.103416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-06 15:45:37.103451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-06 15:45:37.103586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-06 15:45:37.103620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-06 15:45:37.103730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-06 15:45:37.103764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-06 15:45:37.104004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-06 15:45:37.104038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 
00:28:31.619 [2024-12-06 15:45:37.104213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-06 15:45:37.104246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-06 15:45:37.104508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-06 15:45:37.104542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-06 15:45:37.104715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-06 15:45:37.104749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.104882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.104915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.105096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.105130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 
00:28:31.620 [2024-12-06 15:45:37.105260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.105294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.105559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.105593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.105863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.105896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.106148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.106182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.106374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.106408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 
00:28:31.620 [2024-12-06 15:45:37.106601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.106634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.106882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.106915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.107101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.107135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.107262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.107296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.107538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.107573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 
00:28:31.620 [2024-12-06 15:45:37.107833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.107867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.108081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.108114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.108382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.108416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.108596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.108629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.108892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.108925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 
00:28:31.620 [2024-12-06 15:45:37.109066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.109099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.109380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.109415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.109598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.109631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.109848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.109881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.109993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.110024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 
00:28:31.620 [2024-12-06 15:45:37.110211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.110245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.110376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.110410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.110601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.110635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.110896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.110929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.111190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.111223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 
00:28:31.620 [2024-12-06 15:45:37.111341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.111384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.111651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.111685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.111866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.111899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.112024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.112056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.112170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.112204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 
00:28:31.620 [2024-12-06 15:45:37.112488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.112523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.112714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.112747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.112855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.112888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.113001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.113034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 00:28:31.620 [2024-12-06 15:45:37.113207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.113240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.620 qpair failed and we were unable to recover it. 
00:28:31.620 [2024-12-06 15:45:37.113454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.620 [2024-12-06 15:45:37.113488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.113682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.113716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.113958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.113992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.114183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.114215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.114398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.114439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 
00:28:31.621 [2024-12-06 15:45:37.114567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.114601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.114788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.114821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.115015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.115049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.115182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.115216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.115347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.115386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 
00:28:31.621 [2024-12-06 15:45:37.115666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.115700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.115885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.115918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.116161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.116194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.116389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.116424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.116677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.116712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 
00:28:31.621 [2024-12-06 15:45:37.116827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.116861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.117052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.117086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.117285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.117319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.117530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.117566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.117672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.117705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 
00:28:31.621 [2024-12-06 15:45:37.117881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.117913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.118048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.118082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.118258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.118292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.118548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.118582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.118797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.118830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 
00:28:31.621 [2024-12-06 15:45:37.119007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.119040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.119161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.119195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.119365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.119407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.119582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.119616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.119903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.119936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 
00:28:31.621 [2024-12-06 15:45:37.120148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.120181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.120434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.120474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.120717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.120751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.120876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.120909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 00:28:31.621 [2024-12-06 15:45:37.121097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.121131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.621 qpair failed and we were unable to recover it. 
00:28:31.621 [2024-12-06 15:45:37.121318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.621 [2024-12-06 15:45:37.121351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.121571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.121606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.121790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.121823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.122088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.122121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.122237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.122271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 
00:28:31.622 [2024-12-06 15:45:37.122447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.122481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.122656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.122689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.122817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.122850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.123126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.123159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.123405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.123439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 
00:28:31.622 [2024-12-06 15:45:37.123653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.123687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.123934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.123967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.124093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.124126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.124315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.124347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.124530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.124563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 
00:28:31.622 [2024-12-06 15:45:37.124804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.124837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.125086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.125118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.125383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.125418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.125609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.125643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.125817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.125850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 
00:28:31.622 [2024-12-06 15:45:37.126042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.126076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.126268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.126302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.126540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.126575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.126748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.126781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.127004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.127039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 
00:28:31.622 [2024-12-06 15:45:37.127294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.127326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.127587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.127622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.127795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.127828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.128072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.128106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.128347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.128389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 
00:28:31.622 [2024-12-06 15:45:37.128575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.128609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.128819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.128853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.128979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.129012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-06 15:45:37.129181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-06 15:45:37.129214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.129501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.129536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-06 15:45:37.129645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.129678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.129919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.129953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.130153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.130187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.130361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.130412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.130604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.130638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-06 15:45:37.130829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.130862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.130972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.131005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.131245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.131278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.131399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.131435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.131625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.131658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-06 15:45:37.131844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.131877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.132009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.132042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.132222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.132256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.132492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.132527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.132703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.132738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-06 15:45:37.132940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.132974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.133191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.133224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.133487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.133521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.133734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.133767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.133953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.133986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-06 15:45:37.134228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.134261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.134501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.134536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.134735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.134768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.135004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.135037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.135223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.135257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-06 15:45:37.135518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.135552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.135796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.135829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.136016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.136049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.136312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.136345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.136547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.136587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-06 15:45:37.136793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.136826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.137030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.137063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.137255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.137287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.137523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.137557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.137757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.137791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-06 15:45:37.138051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.138084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.138264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-06 15:45:37.138297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-06 15:45:37.138557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.138592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.138791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.138825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.139032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.139065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 
00:28:31.624 [2024-12-06 15:45:37.139354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.139398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.139645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.139679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.139926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.139960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.140163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.140196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.140439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.140472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 
00:28:31.624 [2024-12-06 15:45:37.140606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.140640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.140885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.140920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.141190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.141223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.141354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.141393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.141526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.141559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 
00:28:31.624 [2024-12-06 15:45:37.141762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.141794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.141967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.142001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.142183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.142216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.142321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.142354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.142618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.142651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 
00:28:31.624 [2024-12-06 15:45:37.142835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.142868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.142981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.143020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.143215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.143248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.143429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.143463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.143632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.143665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 
00:28:31.624 [2024-12-06 15:45:37.143885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.143918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.144160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.144193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.144380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.144414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.144617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.144651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.144775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.144807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 
00:28:31.624 [2024-12-06 15:45:37.144997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.145031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.145218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.145253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.145437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.145471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.145663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.145696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.145826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.145859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 
00:28:31.624 [2024-12-06 15:45:37.146062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.146095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-06 15:45:37.146285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-06 15:45:37.146318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.146523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.146558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.146745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.146778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.146965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.146998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 
00:28:31.625 [2024-12-06 15:45:37.147184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.147217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.147396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.147430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.147558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.147591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.147842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.147875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.148078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.148111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 
00:28:31.625 [2024-12-06 15:45:37.148386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.148421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.148612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.148645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.148857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.148891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.149028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.149067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.149321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.149354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 
00:28:31.625 [2024-12-06 15:45:37.149619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.149653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.149890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.149923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.150133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.150166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.150453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.150488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.150684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.150717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 
00:28:31.625 [2024-12-06 15:45:37.150959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.150992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.151178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.151211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.151401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.151435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.151612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.151645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.151823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.151857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 
00:28:31.625 [2024-12-06 15:45:37.151979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.152013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.152201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.152234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.152379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.152414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.152575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.152610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.152804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.152837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 
00:28:31.625 [2024-12-06 15:45:37.153017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.153050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.153169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.153202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.153391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.153426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.153601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.153635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.153849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.153882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 
00:28:31.625 [2024-12-06 15:45:37.154093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.154126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.154309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.154341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.154496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.154530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.154649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.154682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 00:28:31.625 [2024-12-06 15:45:37.154922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.625 [2024-12-06 15:45:37.154955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.625 qpair failed and we were unable to recover it. 
00:28:31.625 [2024-12-06 15:45:37.155093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.155126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.155347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.155470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.155592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.155625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.155809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.155842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.156028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.156061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 
00:28:31.626 [2024-12-06 15:45:37.156326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.156359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.156551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.156584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.156688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.156721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.156911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.156943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.157062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.157094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 
00:28:31.626 [2024-12-06 15:45:37.157291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.157325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.157547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.157581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.157824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.157857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.158121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.158154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.158286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.158320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 
00:28:31.626 [2024-12-06 15:45:37.158539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.158574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.158759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.158792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.158975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.159008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.159199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.159231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.159404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.159438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 
00:28:31.626 [2024-12-06 15:45:37.159576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.159609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.159813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.159845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.160022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.160054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.160188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.160221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.160404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.160438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 
00:28:31.626 [2024-12-06 15:45:37.160618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.160650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.160919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.160952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.161214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.161247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.161507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.161540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.161780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.161813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 
00:28:31.626 [2024-12-06 15:45:37.162068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.162102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.162340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.162397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.162664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.162698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.162962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.162994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.163200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.163233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 
00:28:31.626 [2024-12-06 15:45:37.163450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.163484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.163672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.163705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.163900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.163933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.626 qpair failed and we were unable to recover it. 00:28:31.626 [2024-12-06 15:45:37.164183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.626 [2024-12-06 15:45:37.164216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-06 15:45:37.164456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-06 15:45:37.164489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 
00:28:31.627 [2024-12-06 15:45:37.164604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-06 15:45:37.164639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-06 15:45:37.164817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-06 15:45:37.164857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-06 15:45:37.165075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-06 15:45:37.165107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-06 15:45:37.165212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-06 15:45:37.165245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-06 15:45:37.165509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-06 15:45:37.165543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 
00:28:31.627 [2024-12-06 15:45:37.165806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.627 [2024-12-06 15:45:37.165839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.627 qpair failed and we were unable to recover it.
[... same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeated 114 more times, timestamps 2024-12-06 15:45:37.166016 through 15:45:37.191834 ...]
00:28:31.630 [2024-12-06 15:45:37.191950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.191983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.192247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.192280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.192401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.192435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.192606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.192647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.192888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.192921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 
00:28:31.630 [2024-12-06 15:45:37.193102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.193136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.193261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.193295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.193531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.193566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.193750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.193784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.193977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.194009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 
00:28:31.630 [2024-12-06 15:45:37.194142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.194175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.194363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.194406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.194592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.194626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.194860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.194893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.195015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.195048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 
00:28:31.630 [2024-12-06 15:45:37.195154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.195187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.195430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.195464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.195651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.195685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.195894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.195927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.196056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.196089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 
00:28:31.630 [2024-12-06 15:45:37.196305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.196340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.196517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.196550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.196816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.196850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.197049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.197083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.197325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.197358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 
00:28:31.630 [2024-12-06 15:45:37.197573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.197606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.197723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.197757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.197945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.197978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.630 qpair failed and we were unable to recover it. 00:28:31.630 [2024-12-06 15:45:37.198219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.630 [2024-12-06 15:45:37.198252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.198550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.198584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 
00:28:31.631 [2024-12-06 15:45:37.198771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.198810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.199019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.199052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.199166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.199199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.199405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.199440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.199637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.199670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 
00:28:31.631 [2024-12-06 15:45:37.199782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.199816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.200090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.200124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.200307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.200339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.200525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.200559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.200751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.200785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 
00:28:31.631 [2024-12-06 15:45:37.201023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.201056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.201231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.201264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.201448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.201483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.201730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.201763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.201952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.201985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 
00:28:31.631 [2024-12-06 15:45:37.202106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.202139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.202255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.202289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.202420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.202454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.202629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.202662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.202878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.202911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 
00:28:31.631 [2024-12-06 15:45:37.203106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.203139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.203267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.203300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.203509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.203543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.203683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.203716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.203982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.204014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 
00:28:31.631 [2024-12-06 15:45:37.204201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.204234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.204419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.204453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.204671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.204704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.205022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.205055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.205240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.205273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 
00:28:31.631 [2024-12-06 15:45:37.205390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.205424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.205615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.205649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.205827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.205861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.206051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.206083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.206251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.206285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 
00:28:31.631 [2024-12-06 15:45:37.206410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.206445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.206701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.631 [2024-12-06 15:45:37.206734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.631 qpair failed and we were unable to recover it. 00:28:31.631 [2024-12-06 15:45:37.206838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-06 15:45:37.206872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-06 15:45:37.207000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-06 15:45:37.207032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-06 15:45:37.207289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-06 15:45:37.207322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 
00:28:31.632 [2024-12-06 15:45:37.207581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-06 15:45:37.207615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-06 15:45:37.207835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-06 15:45:37.207869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-06 15:45:37.208115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-06 15:45:37.208148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-06 15:45:37.208336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-06 15:45:37.208376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-06 15:45:37.208551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-06 15:45:37.208584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 
00:28:31.632 [2024-12-06 15:45:37.208788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.632 [2024-12-06 15:45:37.208822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.632 qpair failed and we were unable to recover it.
00:28:31.632 [... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously with timestamps 15:45:37.209060 through 15:45:37.234751 ...]
00:28:31.635 [2024-12-06 15:45:37.234881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.234914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.235102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.235135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.235338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.235378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.235502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.235541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.235776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.235809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 
00:28:31.635 [2024-12-06 15:45:37.235912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.235945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.236207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.236240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.236421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.236456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.236627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.236660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.236763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.236795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 
00:28:31.635 [2024-12-06 15:45:37.236927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.236960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.237160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.237193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.237462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.237497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.237691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.237724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.237978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.238011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 
00:28:31.635 [2024-12-06 15:45:37.238219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.238252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.238448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.238483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.238610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.238644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.238825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.238858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.239052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.239085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 
00:28:31.635 [2024-12-06 15:45:37.239298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.239331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.239450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.239485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.239684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.239717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.239954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.239987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.240173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.240205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 
00:28:31.635 [2024-12-06 15:45:37.240332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.240365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.240506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-06 15:45:37.240540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-06 15:45:37.240785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.240819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.241011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.241044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.241292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.241326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 
00:28:31.636 [2024-12-06 15:45:37.241575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.241615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.241789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.241823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.242092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.242125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.242407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.242442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.242626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.242659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 
00:28:31.636 [2024-12-06 15:45:37.242803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.242836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.242976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.243009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.243117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.243150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.243331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.243364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.243550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.243583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 
00:28:31.636 [2024-12-06 15:45:37.243757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.243790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.243969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.244002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.244138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.244171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.244377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.244411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.244690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.244724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 
00:28:31.636 [2024-12-06 15:45:37.244962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.244994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.245120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.245153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.245448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.245482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.245616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.245649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.245890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.245923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 
00:28:31.636 [2024-12-06 15:45:37.246103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.246136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.246320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.246353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.246488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.246521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.246653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.246686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.246935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.246968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 
00:28:31.636 [2024-12-06 15:45:37.247093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.247126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.247296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.247329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.247474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.247513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.247695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.247728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.247923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.247956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 
00:28:31.636 [2024-12-06 15:45:37.248141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.248173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.248288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.248321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.248547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.248582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.248795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.248828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-06 15:45:37.249007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.249041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 
00:28:31.636 [2024-12-06 15:45:37.249236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-06 15:45:37.249269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.637 [2024-12-06 15:45:37.249455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.637 [2024-12-06 15:45:37.249490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.637 qpair failed and we were unable to recover it. 00:28:31.637 [2024-12-06 15:45:37.249667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.637 [2024-12-06 15:45:37.249701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.637 qpair failed and we were unable to recover it. 00:28:31.637 [2024-12-06 15:45:37.249873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.637 [2024-12-06 15:45:37.249907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.637 qpair failed and we were unable to recover it. 00:28:31.637 [2024-12-06 15:45:37.250164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.637 [2024-12-06 15:45:37.250196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.637 qpair failed and we were unable to recover it. 
00:28:31.637 [2024-12-06 15:45:37.250473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.637 [2024-12-06 15:45:37.250507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.637 qpair failed and we were unable to recover it. 00:28:31.637 [2024-12-06 15:45:37.250724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.637 [2024-12-06 15:45:37.250758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.637 qpair failed and we were unable to recover it. 00:28:31.637 [2024-12-06 15:45:37.250946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.637 [2024-12-06 15:45:37.250979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.637 qpair failed and we were unable to recover it. 00:28:31.637 [2024-12-06 15:45:37.251160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.637 [2024-12-06 15:45:37.251193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.637 qpair failed and we were unable to recover it. 00:28:31.637 [2024-12-06 15:45:37.251385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.637 [2024-12-06 15:45:37.251420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.637 qpair failed and we were unable to recover it. 
00:28:31.637 [2024-12-06 15:45:37.251634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.637 [2024-12-06 15:45:37.251667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.637 qpair failed and we were unable to recover it.
[same message pair repeated through 2024-12-06 15:45:37.277948: every connect() retry to addr=10.0.0.2, port=4420 for tqpair=0x2179be0 failed with errno = 111 (ECONNREFUSED), and the qpair was never recovered]
00:28:31.640 [2024-12-06 15:45:37.278214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.278248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.278537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.278572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.278759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.278792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.278926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.278960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.279153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.279187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 
00:28:31.640 [2024-12-06 15:45:37.279450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.279484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.279672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.279705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.279890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.279924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.280116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.280149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.280343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.280383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 
00:28:31.640 [2024-12-06 15:45:37.280571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.280604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.280841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.280875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.281071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.281104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.281296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.281330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.281529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.281564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 
00:28:31.640 [2024-12-06 15:45:37.281831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.281864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.282000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.282034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.282168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.282201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.282466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.282500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.282605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.282639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 
00:28:31.640 [2024-12-06 15:45:37.282832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.282865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.283037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.283070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.283391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.283426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.283620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.283653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-06 15:45:37.283782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.283815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 
00:28:31.640 [2024-12-06 15:45:37.284050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-06 15:45:37.284084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.284187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.284220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.284403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.284438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.284629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.284663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.284945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.284977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 
00:28:31.641 [2024-12-06 15:45:37.285203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.285241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.285509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.285544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.285740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.285773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.286010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.286043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.286232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.286266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 
00:28:31.641 [2024-12-06 15:45:37.286442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.286477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.286597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.286630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.286868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.286900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.287141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.287173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.287434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.287469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 
00:28:31.641 [2024-12-06 15:45:37.287593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.287625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.287741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.287774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.288031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.288064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.288178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.288209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.288491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.288525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 
00:28:31.641 [2024-12-06 15:45:37.288652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.288683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.288863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.288896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.289133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.289166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.289353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.289395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.289580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.289612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 
00:28:31.641 [2024-12-06 15:45:37.289854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.289886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.290076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.290109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.290297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.290330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.290521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.290555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.290758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.290800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 
00:28:31.641 [2024-12-06 15:45:37.291006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.291039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.291227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.291262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.291461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.291505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.291632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.291665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.291847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.291880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 
00:28:31.641 [2024-12-06 15:45:37.292118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.292151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.292397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.292432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.292550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.292583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.292702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.292738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-06 15:45:37.292929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-06 15:45:37.292965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 
00:28:31.642 [2024-12-06 15:45:37.293088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.293122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 00:28:31.642 [2024-12-06 15:45:37.293388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.293424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 00:28:31.642 [2024-12-06 15:45:37.293562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.293596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 00:28:31.642 [2024-12-06 15:45:37.293801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.293834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 00:28:31.642 [2024-12-06 15:45:37.293964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.293998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 
00:28:31.642 [2024-12-06 15:45:37.294206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.294240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 00:28:31.642 [2024-12-06 15:45:37.294388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.294425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 00:28:31.642 [2024-12-06 15:45:37.294607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.294640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 00:28:31.642 [2024-12-06 15:45:37.294833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.294867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 00:28:31.642 [2024-12-06 15:45:37.295039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.295072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 
00:28:31.642 [2024-12-06 15:45:37.295288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.295321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 00:28:31.642 [2024-12-06 15:45:37.295573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.295609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 00:28:31.642 [2024-12-06 15:45:37.295892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.295925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 00:28:31.642 [2024-12-06 15:45:37.296054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.296088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 00:28:31.642 [2024-12-06 15:45:37.296275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.642 [2024-12-06 15:45:37.296309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.642 qpair failed and we were unable to recover it. 
00:28:31.645 [2024-12-06 15:45:37.321125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.321160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.321342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.321384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.321606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.321640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.321908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.321942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.322124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.322159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 
00:28:31.645 [2024-12-06 15:45:37.322345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.322391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.322590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.322624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.322751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.322785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.322970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.323004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.323136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.323169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 
00:28:31.645 [2024-12-06 15:45:37.323354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.323408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.323526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.323559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.323680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.323714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.323894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.323928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.324122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.324156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 
00:28:31.645 [2024-12-06 15:45:37.324286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.324321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.324545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.324582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.324859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.324894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.325138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.325172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.325281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.325314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 
00:28:31.645 [2024-12-06 15:45:37.325450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.325485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.325683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.325717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.325982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.326014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.326138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.326172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-06 15:45:37.326352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-06 15:45:37.326398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-06 15:45:37.326527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.326559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.326765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.326801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.326933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.326968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.327101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.327136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.327412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.327448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-06 15:45:37.327719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.327753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.327961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.327995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.328213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.328247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.328429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.328467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.328660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.328693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-06 15:45:37.328816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.328851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.328965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.328997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.329261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.329294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.329477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.329512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.329629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.329662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-06 15:45:37.329952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.329986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.330231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.330265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.330457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.330491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.330667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.330701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.330914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.330950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-06 15:45:37.331142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.331183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.331392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.331430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.331608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.331641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.331835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.331868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.332122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.332156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-06 15:45:37.332292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.332331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.332596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.332632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.332814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.332848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.332976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.333010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.333195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.333231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-06 15:45:37.333422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.333459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.333580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.333615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.333755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.333797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.334001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.334037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.334304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.334340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-06 15:45:37.334527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.334563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.334696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.334731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-06 15:45:37.334933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-06 15:45:37.334967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.647 [2024-12-06 15:45:37.335100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.335135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 00:28:31.647 [2024-12-06 15:45:37.335409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.335448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 
00:28:31.647 [2024-12-06 15:45:37.335641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.335674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 00:28:31.647 [2024-12-06 15:45:37.335931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.335966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 00:28:31.647 [2024-12-06 15:45:37.336141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.336175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 00:28:31.647 [2024-12-06 15:45:37.336363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.336408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 00:28:31.647 [2024-12-06 15:45:37.336537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.336570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 
00:28:31.647 [2024-12-06 15:45:37.336767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.336813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 00:28:31.647 [2024-12-06 15:45:37.336956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.337000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 00:28:31.647 [2024-12-06 15:45:37.337181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.337214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 00:28:31.647 [2024-12-06 15:45:37.337391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.337426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 00:28:31.647 [2024-12-06 15:45:37.337693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.337727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 
00:28:31.647 [2024-12-06 15:45:37.337915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.337949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 00:28:31.647 [2024-12-06 15:45:37.338131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.338165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 00:28:31.647 [2024-12-06 15:45:37.338421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.338458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 00:28:31.647 [2024-12-06 15:45:37.338659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.338693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 00:28:31.647 [2024-12-06 15:45:37.338883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.647 [2024-12-06 15:45:37.338916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.647 qpair failed and we were unable to recover it. 
00:28:31.650 [2024-12-06 15:45:37.363321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.363355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.363661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.363696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.363804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.363837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.363988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.364023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.364288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.364324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 
00:28:31.650 [2024-12-06 15:45:37.364517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.364553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.364736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.364771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.364909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.364946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.365121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.365157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.365345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.365388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 
00:28:31.650 [2024-12-06 15:45:37.365516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.365550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.365745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.365779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.365964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.365999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.366181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.366215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.366351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.366396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 
00:28:31.650 [2024-12-06 15:45:37.366516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.366550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.366657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.366696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.366889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.366923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.367102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.367136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.367403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.367440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 
00:28:31.650 [2024-12-06 15:45:37.367576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.367612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.367741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.367777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.650 qpair failed and we were unable to recover it. 00:28:31.650 [2024-12-06 15:45:37.367978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.650 [2024-12-06 15:45:37.368012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.368276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.368309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.368434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.368469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 
00:28:31.651 [2024-12-06 15:45:37.368747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.368780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.368907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.368941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.369159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.369191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.369394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.369428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.369540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.369573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 
00:28:31.651 [2024-12-06 15:45:37.369789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.369823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.369950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.369983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.370106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.370140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.370390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.370426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.370602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.370636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 
00:28:31.651 [2024-12-06 15:45:37.370754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.370792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.370989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.371028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.371215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.371248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.371452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.371487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.371596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.371631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 
00:28:31.651 [2024-12-06 15:45:37.371829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.371864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.372060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.372092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.372226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.372259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.372467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.372504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.372709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.372743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 
00:28:31.651 [2024-12-06 15:45:37.372984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.373020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.373208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.373243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.373432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.373469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.373590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.373624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.373807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.373841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 
00:28:31.651 [2024-12-06 15:45:37.374029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.374067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.374253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.374291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.374559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.374598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.374866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.374901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.375077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.375111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 
00:28:31.651 [2024-12-06 15:45:37.375294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.375329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.375551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-06 15:45:37.375592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-06 15:45:37.375722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.375760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-06 15:45:37.376012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.376048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-06 15:45:37.376170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.376206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 
00:28:31.652 [2024-12-06 15:45:37.376313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.376346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-06 15:45:37.376480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.376515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-06 15:45:37.376804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.376840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-06 15:45:37.377036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.377070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-06 15:45:37.377257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.377294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 
00:28:31.652 [2024-12-06 15:45:37.377440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.377475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-06 15:45:37.377662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.377699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-06 15:45:37.377969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.378003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-06 15:45:37.378124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.378157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-06 15:45:37.378293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.378328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 
00:28:31.652 [2024-12-06 15:45:37.378526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.378562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-06 15:45:37.378844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.378881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-06 15:45:37.379097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.379139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-06 15:45:37.379334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.379375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-06 15:45:37.379563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-06 15:45:37.379597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 
00:28:31.652 [2024-12-06 15:45:37.379768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.652 [2024-12-06 15:45:37.379803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.652 qpair failed and we were unable to recover it.
[... identical three-line sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for each reconnect attempt, timestamps 15:45:37.379925 through 15:45:37.405304 ...]
00:28:31.655 [2024-12-06 15:45:37.405416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.405451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.405701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.405736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.405922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.405956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.406225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.406260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.406443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.406476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 
00:28:31.655 [2024-12-06 15:45:37.406743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.406777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.406912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.406945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.407129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.407163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.407363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.407407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.407529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.407562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 
00:28:31.655 [2024-12-06 15:45:37.407751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.407784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.408023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.408058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.408245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.408278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.408468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.408504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.408688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.408722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 
00:28:31.655 [2024-12-06 15:45:37.408898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.408933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.409118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.409158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.409343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.409385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.409574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.409608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.409846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.409879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 
00:28:31.655 [2024-12-06 15:45:37.410016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.410050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.410254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.410289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.410415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.410451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.410622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.410656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.410893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.410927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 
00:28:31.655 [2024-12-06 15:45:37.411040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.411075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.411255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.411288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.411412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.411446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.411635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.411669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.411909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.411941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 
00:28:31.655 [2024-12-06 15:45:37.412060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.412094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.412214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.412248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.412441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.655 [2024-12-06 15:45:37.412477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.655 qpair failed and we were unable to recover it. 00:28:31.655 [2024-12-06 15:45:37.412596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.412631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.412757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.412790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 
00:28:31.656 [2024-12-06 15:45:37.412967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.413001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.413247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.413281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.413406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.413440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.413634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.413667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.413867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.413901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 
00:28:31.656 [2024-12-06 15:45:37.414028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.414060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.414180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.414213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.414336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.414393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.414577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.414616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.414738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.414770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 
00:28:31.656 [2024-12-06 15:45:37.414903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.414936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.415132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.415166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.415278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.415311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.415452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.415486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.415594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.415628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 
00:28:31.656 [2024-12-06 15:45:37.415837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.415870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.416051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.416083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.416219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.416252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.416519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.416553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.416729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.416762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 
00:28:31.656 [2024-12-06 15:45:37.416940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.416973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.417147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.417179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.417312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.417347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.417539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.417572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.417674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.417706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 
00:28:31.656 [2024-12-06 15:45:37.417910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.417945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.418186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.418219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.418412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.418447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.418632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.418666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.418840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.418873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 
00:28:31.656 [2024-12-06 15:45:37.419153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.419186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.419396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.419431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.419625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.419660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.419853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.419887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 00:28:31.656 [2024-12-06 15:45:37.420011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.656 [2024-12-06 15:45:37.420044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.656 qpair failed and we were unable to recover it. 
00:28:31.656 [2024-12-06 15:45:37.420178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.657 [2024-12-06 15:45:37.420212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.657 qpair failed and we were unable to recover it. 00:28:31.657 [2024-12-06 15:45:37.420487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.657 [2024-12-06 15:45:37.420523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.657 qpair failed and we were unable to recover it. 00:28:31.657 [2024-12-06 15:45:37.420659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.657 [2024-12-06 15:45:37.420692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.657 qpair failed and we were unable to recover it. 00:28:31.657 [2024-12-06 15:45:37.420895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.657 [2024-12-06 15:45:37.420930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.657 qpair failed and we were unable to recover it. 00:28:31.657 [2024-12-06 15:45:37.421167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.657 [2024-12-06 15:45:37.421202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.657 qpair failed and we were unable to recover it. 
00:28:31.657 [2024-12-06 15:45:37.421392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.657 [2024-12-06 15:45:37.421427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.657 qpair failed and we were unable to recover it. 00:28:31.657 [2024-12-06 15:45:37.421639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.657 [2024-12-06 15:45:37.421674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.657 qpair failed and we were unable to recover it. 00:28:31.657 [2024-12-06 15:45:37.421786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.657 [2024-12-06 15:45:37.421821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.657 qpair failed and we were unable to recover it. 00:28:31.657 [2024-12-06 15:45:37.422024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.657 [2024-12-06 15:45:37.422058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.657 qpair failed and we were unable to recover it. 00:28:31.657 [2024-12-06 15:45:37.422297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.657 [2024-12-06 15:45:37.422331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.657 qpair failed and we were unable to recover it. 
00:28:31.661 [2024-12-06 15:45:37.445707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.445740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.445875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.445909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.446080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.446113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.446407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.446442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.446646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.446678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 
00:28:31.661 [2024-12-06 15:45:37.446865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.446898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.447006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.447039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.447276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.447310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.447527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.447560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.447750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.447783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 
00:28:31.661 [2024-12-06 15:45:37.448020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.448053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.448338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.448378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.448497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.448530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.448731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.448764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.448898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.448933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 
00:28:31.661 [2024-12-06 15:45:37.449183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.449215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.449404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.449445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.449576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.449609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.449714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.449746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 00:28:31.661 [2024-12-06 15:45:37.449987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.661 [2024-12-06 15:45:37.450020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.661 qpair failed and we were unable to recover it. 
00:28:31.661 [2024-12-06 15:45:37.450152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.450185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.450302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.450334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.450477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.450512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.450647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.450682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.450795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.450828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 
00:28:31.662 [2024-12-06 15:45:37.451072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.451107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.451296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.451331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.451516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.451552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.451815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.451848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.451969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.452002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 
00:28:31.662 [2024-12-06 15:45:37.452265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.452299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.452482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.452519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.452714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.452746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.452949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.452981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.453105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.453141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 
00:28:31.662 [2024-12-06 15:45:37.453318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.453353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.453496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.453529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.453720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.453752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.453955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.453987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.454174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.454208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 
00:28:31.662 [2024-12-06 15:45:37.454324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.454357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.454485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.454520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.454700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.454732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.454866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.454905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.455036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.455070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 
00:28:31.662 [2024-12-06 15:45:37.455181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.455215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.662 qpair failed and we were unable to recover it. 00:28:31.662 [2024-12-06 15:45:37.455325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.662 [2024-12-06 15:45:37.455358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.455604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.455638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.455812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.455844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.455988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.456023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 
00:28:31.663 [2024-12-06 15:45:37.456127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.456161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.456382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.456417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.456534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.456566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.456814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.456847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.457031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.457066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 
00:28:31.663 [2024-12-06 15:45:37.457308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.457342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.457477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.457512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.457705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.457738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.457923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.457957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.458076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.458111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 
00:28:31.663 [2024-12-06 15:45:37.458227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.458259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.458448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.458483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.458737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.458771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.458992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.459025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.459194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.459227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 
00:28:31.663 [2024-12-06 15:45:37.459398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.459430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.459550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.459582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.459698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.459730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.459865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.459900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.460078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.460111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 
00:28:31.663 [2024-12-06 15:45:37.460290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.460329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.460471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.460506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.460769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.460803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.663 [2024-12-06 15:45:37.461043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.663 [2024-12-06 15:45:37.461077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.663 qpair failed and we were unable to recover it. 00:28:31.664 [2024-12-06 15:45:37.461349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.664 [2024-12-06 15:45:37.461391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.664 qpair failed and we were unable to recover it. 
00:28:31.664 [2024-12-06 15:45:37.461633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.664 [2024-12-06 15:45:37.461668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.664 qpair failed and we were unable to recover it. 00:28:31.664 [2024-12-06 15:45:37.461904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.664 [2024-12-06 15:45:37.461937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.664 qpair failed and we were unable to recover it. 00:28:31.664 [2024-12-06 15:45:37.462118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.664 [2024-12-06 15:45:37.462152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.664 qpair failed and we were unable to recover it. 00:28:31.664 [2024-12-06 15:45:37.462404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.664 [2024-12-06 15:45:37.462438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.664 qpair failed and we were unable to recover it. 00:28:31.664 [2024-12-06 15:45:37.462710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.664 [2024-12-06 15:45:37.462742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.664 qpair failed and we were unable to recover it. 
00:28:31.667 [2024-12-06 15:45:37.490636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.667 [2024-12-06 15:45:37.490672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.667 qpair failed and we were unable to recover it. 00:28:31.667 [2024-12-06 15:45:37.490856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.667 [2024-12-06 15:45:37.490889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.667 qpair failed and we were unable to recover it. 00:28:31.667 [2024-12-06 15:45:37.491067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.667 [2024-12-06 15:45:37.491101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.667 qpair failed and we were unable to recover it. 00:28:31.667 [2024-12-06 15:45:37.491336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.667 [2024-12-06 15:45:37.491394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.667 qpair failed and we were unable to recover it. 00:28:31.667 [2024-12-06 15:45:37.491515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.667 [2024-12-06 15:45:37.491549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.667 qpair failed and we were unable to recover it. 
00:28:31.667 [2024-12-06 15:45:37.491686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.667 [2024-12-06 15:45:37.491721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.667 qpair failed and we were unable to recover it. 00:28:31.667 [2024-12-06 15:45:37.491915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.667 [2024-12-06 15:45:37.491949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.667 qpair failed and we were unable to recover it. 00:28:31.667 [2024-12-06 15:45:37.492090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.667 [2024-12-06 15:45:37.492123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.667 qpair failed and we were unable to recover it. 00:28:31.667 [2024-12-06 15:45:37.492392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.667 [2024-12-06 15:45:37.492427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.667 qpair failed and we were unable to recover it. 00:28:31.667 [2024-12-06 15:45:37.492556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.667 [2024-12-06 15:45:37.492591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.667 qpair failed and we were unable to recover it. 
00:28:31.667 [2024-12-06 15:45:37.492830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.492864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.493080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.493116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.493311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.493350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.493491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.493526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.493665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.493701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 
00:28:31.668 [2024-12-06 15:45:37.493956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.493991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.494262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.494301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.494438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.494475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.494721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.494756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.494985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.495019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 
00:28:31.668 [2024-12-06 15:45:37.495260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.495294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.495549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.495583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.495775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.495823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.496090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.496125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.496365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.496411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 
00:28:31.668 [2024-12-06 15:45:37.496683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.496719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.496911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.496943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.497137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.497179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.497446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.497482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.497608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.497643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 
00:28:31.668 [2024-12-06 15:45:37.497780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.497813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.498014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.498046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.498289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.498322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.498438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.498470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.498590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.498624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 
00:28:31.668 [2024-12-06 15:45:37.498936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.498973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.499147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.499184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.499392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.499428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.499558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.499592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.499834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.499867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 
00:28:31.668 [2024-12-06 15:45:37.499999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.500031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.500286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.500323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.500585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.500621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.500811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.500853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.501131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.501163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 
00:28:31.668 [2024-12-06 15:45:37.501342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.501385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.501642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.501679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.501859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.501901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.668 qpair failed and we were unable to recover it. 00:28:31.668 [2024-12-06 15:45:37.502168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.668 [2024-12-06 15:45:37.502200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.502415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.502450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 
00:28:31.669 [2024-12-06 15:45:37.502636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.502668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.502930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.502962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.503227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.503267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.503467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.503504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.503661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.503695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 
00:28:31.669 [2024-12-06 15:45:37.503804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.503838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.504108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.504141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.504349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.504390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.504681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.504715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.504889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.504929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 
00:28:31.669 [2024-12-06 15:45:37.505178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.505213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.505356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.505398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.505643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.505676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.505890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.505923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.506138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.506173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 
00:28:31.669 [2024-12-06 15:45:37.506306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.506341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.506614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.506649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.506832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.506866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.507148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.507182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.507291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.507324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 
00:28:31.669 [2024-12-06 15:45:37.507545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.507587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.507710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.507743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.507926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.507959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.508189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.508223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.508364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.508409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 
00:28:31.669 [2024-12-06 15:45:37.508599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.508632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.508761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.508795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.509003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.509036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.509216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.509250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.509379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.509417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 
00:28:31.669 [2024-12-06 15:45:37.509681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.509717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.509999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.510033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.510230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.510266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.510457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.510492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.510675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.510709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 
00:28:31.669 [2024-12-06 15:45:37.510903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.510937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.669 qpair failed and we were unable to recover it. 00:28:31.669 [2024-12-06 15:45:37.511177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.669 [2024-12-06 15:45:37.511211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.511386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.511421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.511615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.511649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.511843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.511876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 
00:28:31.670 [2024-12-06 15:45:37.512128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.512167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.512398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.512434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.512549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.512585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.512860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.512894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.513176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.513210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 
00:28:31.670 [2024-12-06 15:45:37.513414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.513448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.513577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.513611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.513854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.513894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.514086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.514119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.514247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.514281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 
00:28:31.670 [2024-12-06 15:45:37.514488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.514524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.514737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.514770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.514965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.514998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.515240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.515275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.515488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.515525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 
00:28:31.670 [2024-12-06 15:45:37.515699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.515732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.515946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.515979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.516246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.516279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.516457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.516491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.516684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.516718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 
00:28:31.670 [2024-12-06 15:45:37.516900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.516933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.517123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.517157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.517423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.517463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.517593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.517629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.517811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.517847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 
00:28:31.670 [2024-12-06 15:45:37.518070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.518102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.518382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.518419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.518687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.518721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.518952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.518985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.519224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.519267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 
00:28:31.670 [2024-12-06 15:45:37.519466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.519502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.519698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.519734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.519909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.519943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.520130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.670 [2024-12-06 15:45:37.520166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.670 qpair failed and we were unable to recover it. 00:28:31.670 [2024-12-06 15:45:37.520289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.520326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 
00:28:31.671 [2024-12-06 15:45:37.520617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.520656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.520834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.520870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.520990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.521024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.521218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.521251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.521519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.521555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 
00:28:31.671 [2024-12-06 15:45:37.521685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.521719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.521834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.521868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.522056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.522095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.522365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.522411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.522602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.522638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 
00:28:31.671 [2024-12-06 15:45:37.522771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.522805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.523002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.523036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.523215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.523250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.523505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.523553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.523745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.523779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 
00:28:31.671 [2024-12-06 15:45:37.523962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.523996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.524210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.524244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.524387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.524422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.524603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.524637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.524878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.524913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 
00:28:31.671 [2024-12-06 15:45:37.525033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.525068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.525258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.525295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.525424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.525462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.525639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.525674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.525793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.525829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 
00:28:31.671 [2024-12-06 15:45:37.525952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.525985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.526158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.526191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.526330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.526363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.526497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.526532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.526777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.526810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 
00:28:31.671 [2024-12-06 15:45:37.527004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.527038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.527312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.527346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.527474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.527509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.527683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.527719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.527848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.527882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 
00:28:31.671 [2024-12-06 15:45:37.528086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.528120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.528266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.528300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.671 qpair failed and we were unable to recover it. 00:28:31.671 [2024-12-06 15:45:37.528515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.671 [2024-12-06 15:45:37.528551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.672 qpair failed and we were unable to recover it. 00:28:31.672 [2024-12-06 15:45:37.528665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.672 [2024-12-06 15:45:37.528697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.672 qpair failed and we were unable to recover it. 00:28:31.672 [2024-12-06 15:45:37.528962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.672 [2024-12-06 15:45:37.528995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.672 qpair failed and we were unable to recover it. 
00:28:31.672 [2024-12-06 15:45:37.529262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.672 [2024-12-06 15:45:37.529302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.672 qpair failed and we were unable to recover it. 00:28:31.672 [2024-12-06 15:45:37.529409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.672 [2024-12-06 15:45:37.529443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.672 qpair failed and we were unable to recover it. 00:28:31.672 [2024-12-06 15:45:37.529549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.672 [2024-12-06 15:45:37.529583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.672 qpair failed and we were unable to recover it. 00:28:31.672 [2024-12-06 15:45:37.529840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.672 [2024-12-06 15:45:37.529872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.672 qpair failed and we were unable to recover it. 00:28:31.672 [2024-12-06 15:45:37.529995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.672 [2024-12-06 15:45:37.530030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.672 qpair failed and we were unable to recover it. 
00:28:31.672 [2024-12-06 15:45:37.530214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.672 [2024-12-06 15:45:37.530248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.672 qpair failed and we were unable to recover it. 
00:28:31.675 [2024-12-06 15:45:37.555920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.555956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.556068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.556108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.556304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.556344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.556485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.556520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.556775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.556809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 
00:28:31.675 [2024-12-06 15:45:37.556945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.556977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.557156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.557188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.557321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.557355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.557489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.557529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.557720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.557758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 
00:28:31.675 [2024-12-06 15:45:37.557937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.557974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.558099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.558132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.558250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.558284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.558575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.558611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.558813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.558851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 
00:28:31.675 [2024-12-06 15:45:37.559037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.559072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.559220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.559257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.559387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.559422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.559686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.559720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 00:28:31.675 [2024-12-06 15:45:37.560009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.560049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.675 qpair failed and we were unable to recover it. 
00:28:31.675 [2024-12-06 15:45:37.560203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.675 [2024-12-06 15:45:37.560242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.560443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.560479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.560653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.560686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.560934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.560968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.561152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.561187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 
00:28:31.676 [2024-12-06 15:45:37.561459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.561498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.561753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.561789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.562077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.562110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.562387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.562425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.562626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.562670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 
00:28:31.676 [2024-12-06 15:45:37.562911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.562946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.563234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.563269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.563540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.563575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.563722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.563759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.563979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.564014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 
00:28:31.676 [2024-12-06 15:45:37.564279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.564314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.564604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.564640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.564849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.564885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.565135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.565172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.565456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.565492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 
00:28:31.676 [2024-12-06 15:45:37.565777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.565812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.565989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.566024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.566265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.566299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.566596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.566636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.566855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.566890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 
00:28:31.676 [2024-12-06 15:45:37.567085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.567119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.567387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.567423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.567541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.567576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.567686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.567720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.567867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.567902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 
00:28:31.676 [2024-12-06 15:45:37.568027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.568065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.568196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.568232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.676 [2024-12-06 15:45:37.568457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.676 [2024-12-06 15:45:37.568509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.676 qpair failed and we were unable to recover it. 00:28:31.677 [2024-12-06 15:45:37.568704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.568737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 00:28:31.677 [2024-12-06 15:45:37.568920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.568955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 
00:28:31.677 [2024-12-06 15:45:37.569090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.569123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 00:28:31.677 [2024-12-06 15:45:37.569387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.569423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 00:28:31.677 [2024-12-06 15:45:37.569625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.569658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 00:28:31.677 [2024-12-06 15:45:37.569871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.569905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 00:28:31.677 [2024-12-06 15:45:37.570096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.570130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 
00:28:31.677 [2024-12-06 15:45:37.570261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.570293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 00:28:31.677 [2024-12-06 15:45:37.570509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.570545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 00:28:31.677 [2024-12-06 15:45:37.570744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.570777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 00:28:31.677 [2024-12-06 15:45:37.570904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.570938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 00:28:31.677 [2024-12-06 15:45:37.571213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.571247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 
00:28:31.677 [2024-12-06 15:45:37.571439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.571475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 00:28:31.677 [2024-12-06 15:45:37.571660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.571694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 00:28:31.677 [2024-12-06 15:45:37.571862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.677 [2024-12-06 15:45:37.571895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.677 qpair failed and we were unable to recover it. 00:28:31.957 [2024-12-06 15:45:37.572088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.957 [2024-12-06 15:45:37.572122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.957 qpair failed and we were unable to recover it. 00:28:31.957 [2024-12-06 15:45:37.572410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.957 [2024-12-06 15:45:37.572445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.957 qpair failed and we were unable to recover it. 
00:28:31.957 [2024-12-06 15:45:37.572560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.957 [2024-12-06 15:45:37.572594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.957 qpair failed and we were unable to recover it. 00:28:31.957 [2024-12-06 15:45:37.572843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.957 [2024-12-06 15:45:37.572876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.957 qpair failed and we were unable to recover it. 00:28:31.957 [2024-12-06 15:45:37.573147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.957 [2024-12-06 15:45:37.573182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.957 qpair failed and we were unable to recover it. 00:28:31.957 [2024-12-06 15:45:37.573383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.957 [2024-12-06 15:45:37.573419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.957 qpair failed and we were unable to recover it. 00:28:31.957 [2024-12-06 15:45:37.573652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.957 [2024-12-06 15:45:37.573684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.957 qpair failed and we were unable to recover it. 
00:28:31.957 [2024-12-06 15:45:37.573868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-06 15:45:37.573902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.960 [2024-12-06 15:45:37.602175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-06 15:45:37.602208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.602404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.602440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.602614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.602653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.602837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.602871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.603046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.603081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 
00:28:31.961 [2024-12-06 15:45:37.603258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.603289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.603425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.603460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.603642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.603676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.603811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.603845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.604088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.604120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 
00:28:31.961 [2024-12-06 15:45:37.604306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.604339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.604525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.604558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.604829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.604863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.605004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.605038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.605159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.605192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 
00:28:31.961 [2024-12-06 15:45:37.605464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.605498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.605758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.605791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.606063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.606099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.606303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.606338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.606455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.606490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 
00:28:31.961 [2024-12-06 15:45:37.606678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.606711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.606905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.606939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.607062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.607094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.607289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.607323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.607557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.607591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 
00:28:31.961 [2024-12-06 15:45:37.607855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.607887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.608169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.608203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.608414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.608450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.608656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.608689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.608798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.608836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 
00:28:31.961 [2024-12-06 15:45:37.609111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.609144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.609414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.609449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.609709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.609742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.609948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.609983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.610099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.610132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 
00:28:31.961 [2024-12-06 15:45:37.610251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.610284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.610477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.610511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.610722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.610755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.961 qpair failed and we were unable to recover it. 00:28:31.961 [2024-12-06 15:45:37.610949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.961 [2024-12-06 15:45:37.610982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.611157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.611190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 
00:28:31.962 [2024-12-06 15:45:37.611439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.611475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.611758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.611792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.611923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.611957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.612206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.612240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.612360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.612406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 
00:28:31.962 [2024-12-06 15:45:37.612678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.612712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.612888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.612922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.613058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.613092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.613358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.613405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.613633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.613667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 
00:28:31.962 [2024-12-06 15:45:37.613961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.613994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.614258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.614292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.614535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.614569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.614777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.614812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.614948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.614980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 
00:28:31.962 [2024-12-06 15:45:37.615244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.615278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.615546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.615585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.615719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.615753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.616021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.616055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.616329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.616362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 
00:28:31.962 [2024-12-06 15:45:37.616675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.616712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.616878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.616914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.617117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.617149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.617414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.617448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.617624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.617659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 
00:28:31.962 [2024-12-06 15:45:37.617927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.617961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.618203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.618236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.618355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.618419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.618683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.618716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.618973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.619006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 
00:28:31.962 [2024-12-06 15:45:37.619263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.619299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.619595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.962 [2024-12-06 15:45:37.619630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.962 qpair failed and we were unable to recover it. 00:28:31.962 [2024-12-06 15:45:37.619890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-06 15:45:37.619923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 00:28:31.963 [2024-12-06 15:45:37.620145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-06 15:45:37.620178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 00:28:31.963 [2024-12-06 15:45:37.620355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-06 15:45:37.620401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 
00:28:31.963 [2024-12-06 15:45:37.620658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-06 15:45:37.620706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 00:28:31.963 [2024-12-06 15:45:37.620962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-06 15:45:37.620995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 00:28:31.963 [2024-12-06 15:45:37.621184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-06 15:45:37.621217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 00:28:31.963 [2024-12-06 15:45:37.621342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-06 15:45:37.621386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 00:28:31.963 [2024-12-06 15:45:37.621677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-06 15:45:37.621710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 
00:28:31.966 [2024-12-06 15:45:37.649438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.649474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.649608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.649644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.649858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.649893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.650080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.650114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.650359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.650404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 
00:28:31.966 [2024-12-06 15:45:37.650721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.650755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.650935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.650967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.651111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.651146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.651283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.651318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.651506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.651540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 
00:28:31.966 [2024-12-06 15:45:37.651739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.651775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.651920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.651953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.652137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.652171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.652360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.652407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.652611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.652646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 
00:28:31.966 [2024-12-06 15:45:37.652822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.652855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.653067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.653102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.653381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.653419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.653667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.653703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.653890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.653924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 
00:28:31.966 [2024-12-06 15:45:37.654123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.654159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.654342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.654403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.654654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.654690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.654811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.654844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.655087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.655121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 
00:28:31.966 [2024-12-06 15:45:37.655394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-06 15:45:37.655432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-06 15:45:37.655714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.655750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.655940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.655973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.656156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.656189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.656477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.656517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 
00:28:31.967 [2024-12-06 15:45:37.656718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.656751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.657043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.657077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.657364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.657409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.657676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.657710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.657895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.657928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 
00:28:31.967 [2024-12-06 15:45:37.658206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.658239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.658444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.658478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.658663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.658697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.658883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.658916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.659207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.659242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 
00:28:31.967 [2024-12-06 15:45:37.659385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.659420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.659672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.659706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.659892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.659927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.660161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.660194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.660409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.660445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 
00:28:31.967 [2024-12-06 15:45:37.660630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.660666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.660933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.660967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.661145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.661180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.661391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.661427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.661679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.661713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 
00:28:31.967 [2024-12-06 15:45:37.661986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.662019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.662148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.662182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.662473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.662507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.662776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.662810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.663016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.663049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 
00:28:31.967 [2024-12-06 15:45:37.663245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.663278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.663486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.663527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.663826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.663860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.664063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-06 15:45:37.664096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-06 15:45:37.664382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.664417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 
00:28:31.968 [2024-12-06 15:45:37.664673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.664706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-06 15:45:37.665001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.665034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-06 15:45:37.665216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.665249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-06 15:45:37.665512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.665547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-06 15:45:37.665737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.665769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 
00:28:31.968 [2024-12-06 15:45:37.665967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.666000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-06 15:45:37.666274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.666307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-06 15:45:37.666597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.666632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-06 15:45:37.666892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.666925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-06 15:45:37.667139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.667172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 
00:28:31.968 [2024-12-06 15:45:37.667359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.667403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-06 15:45:37.667655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.667689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-06 15:45:37.667922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.667955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-06 15:45:37.668202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.668236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-06 15:45:37.668489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-06 15:45:37.668523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 
00:28:31.968 [2024-12-06 15:45:37.668816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.968 [2024-12-06 15:45:37.668849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.968 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously through 00:28:31.971 [2024-12-06 15:45:37.700438] ...]
00:28:31.971 [2024-12-06 15:45:37.700716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.700751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.700875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.700910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.701170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.701205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.701459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.701495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.701645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.701680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 
00:28:31.971 [2024-12-06 15:45:37.701960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.701995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.702255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.702288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.702499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.702535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.702791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.702826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.703045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.703080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 
00:28:31.971 [2024-12-06 15:45:37.703303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.703338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.703558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.703594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.703849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.703883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.704104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.704138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.704342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.704401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 
00:28:31.971 [2024-12-06 15:45:37.704627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.704661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.704857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.704891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.705042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.705079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.705383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-06 15:45:37.705420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-06 15:45:37.705708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.705743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-06 15:45:37.706019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.706054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.706339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.706383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.706618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.706653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.706853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.706886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.707008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.707042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-06 15:45:37.707287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.707321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.707530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.707564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.707825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.707859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.708054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.708089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.708347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.708395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-06 15:45:37.708677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.708713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.708859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.708893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.709081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.709116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.709397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.709432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.709670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.709705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-06 15:45:37.709868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.709901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.710196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.710235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.710488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.710524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.710665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.710700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.710888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.710923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-06 15:45:37.711149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.711183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.711388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.711423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.711638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.711677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.711829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.711866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.712153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.712187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-06 15:45:37.712460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.712498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.712644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.712678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.712909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.712945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.713149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.713183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.713496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.713533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-06 15:45:37.713749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.713783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.714041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.714076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.714263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.714298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.714583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.714620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.714875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.714909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-06 15:45:37.715120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.715154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.715420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-06 15:45:37.715463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-06 15:45:37.715596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.715630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-06 15:45:37.715820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.715855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-06 15:45:37.716107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.716142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 
00:28:31.973 [2024-12-06 15:45:37.716394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.716429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-06 15:45:37.716564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.716598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-06 15:45:37.716856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.716891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-06 15:45:37.717167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.717201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-06 15:45:37.717399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.717435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 
00:28:31.973 [2024-12-06 15:45:37.717668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.717703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-06 15:45:37.717901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.717935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-06 15:45:37.718233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.718267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-06 15:45:37.718547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.718582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-06 15:45:37.718808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.718842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 
00:28:31.973 [2024-12-06 15:45:37.719178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.719212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-06 15:45:37.719452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.719488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-06 15:45:37.719755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.719790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-06 15:45:37.719939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.719974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-06 15:45:37.720227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.720262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 
00:28:31.973 [2024-12-06 15:45:37.720455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-06 15:45:37.720490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it.
[... the same two-line failure pair (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it") repeats continuously from 15:45:37.720455 through 15:45:37.751034; repeated entries omitted ...]
00:28:31.976 [2024-12-06 15:45:37.751242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.751277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-06 15:45:37.751562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.751598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-06 15:45:37.751816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.751851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-06 15:45:37.752111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.752147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-06 15:45:37.752404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.752439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 
00:28:31.976 [2024-12-06 15:45:37.752716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.752750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-06 15:45:37.752952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.752987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-06 15:45:37.753201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.753236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-06 15:45:37.753426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.753463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-06 15:45:37.753686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.753720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 
00:28:31.976 [2024-12-06 15:45:37.754025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.754058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-06 15:45:37.754260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.754294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-06 15:45:37.754571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.754607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-06 15:45:37.754869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.754903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-06 15:45:37.755203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-06 15:45:37.755237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 
00:28:31.976 [2024-12-06 15:45:37.755354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.755409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.755610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.755644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.755847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.755881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.756089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.756123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.756263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.756299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 
00:28:31.977 [2024-12-06 15:45:37.756451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.756486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.756740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.756775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.757038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.757072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.757324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.757358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.757504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.757538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 
00:28:31.977 [2024-12-06 15:45:37.757794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.757828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.758130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.758164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.758315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.758350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.758624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.758658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.758861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.758895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 
00:28:31.977 [2024-12-06 15:45:37.759189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.759223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.759354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.759404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.759673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.759708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.760004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.760039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.760337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.760385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 
00:28:31.977 [2024-12-06 15:45:37.760592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.760627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.760779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.760813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.761079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.761113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.761300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.761334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.761560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.761596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 
00:28:31.977 [2024-12-06 15:45:37.761741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.761776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.762039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.762073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.762326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.762360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.762601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.762636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.762830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.762864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 
00:28:31.977 [2024-12-06 15:45:37.763161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.763195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.763486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.763522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.763794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.763829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.764063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.764097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.764282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.764316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 
00:28:31.977 [2024-12-06 15:45:37.764617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.764653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.764767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.764799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.764953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.764986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.977 [2024-12-06 15:45:37.765182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.977 [2024-12-06 15:45:37.765215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.977 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.765496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.765532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 
00:28:31.978 [2024-12-06 15:45:37.765739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.765773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.766065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.766101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.766315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.766349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.766654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.766690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.766842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.766877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 
00:28:31.978 [2024-12-06 15:45:37.767011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.767045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.767299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.767332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.767603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.767639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.767787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.767821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.767963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.767996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 
00:28:31.978 [2024-12-06 15:45:37.768276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.768310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.768526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.768561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.768689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.768722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.768887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.768922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.769227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.769260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 
00:28:31.978 [2024-12-06 15:45:37.769399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.769436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.769575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.769608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.769840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.769875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.770153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.770189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.770416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.770451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 
00:28:31.978 [2024-12-06 15:45:37.770608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.770642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.770897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.770932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.771137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.771172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.771376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.771410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-06 15:45:37.771689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-06 15:45:37.771723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 
00:28:31.981 [2024-12-06 15:45:37.799747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.981 [2024-12-06 15:45:37.799784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.981 qpair failed and we were unable to recover it. 00:28:31.981 [2024-12-06 15:45:37.799911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.981 [2024-12-06 15:45:37.799946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.981 qpair failed and we were unable to recover it. 00:28:31.981 [2024-12-06 15:45:37.800164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.981 [2024-12-06 15:45:37.800201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.981 qpair failed and we were unable to recover it. 00:28:31.981 [2024-12-06 15:45:37.800407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.981 [2024-12-06 15:45:37.800442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.981 qpair failed and we were unable to recover it. 00:28:31.981 [2024-12-06 15:45:37.800647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.981 [2024-12-06 15:45:37.800681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.981 qpair failed and we were unable to recover it. 
00:28:31.981 [2024-12-06 15:45:37.800937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.981 [2024-12-06 15:45:37.800973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.981 qpair failed and we were unable to recover it. 00:28:31.981 [2024-12-06 15:45:37.801244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.981 [2024-12-06 15:45:37.801279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.981 qpair failed and we were unable to recover it. 00:28:31.981 [2024-12-06 15:45:37.801524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.981 [2024-12-06 15:45:37.801561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.981 qpair failed and we were unable to recover it. 00:28:31.981 [2024-12-06 15:45:37.801695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.981 [2024-12-06 15:45:37.801732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.981 qpair failed and we were unable to recover it. 00:28:31.981 [2024-12-06 15:45:37.801928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.981 [2024-12-06 15:45:37.801963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.981 qpair failed and we were unable to recover it. 
00:28:31.981 [2024-12-06 15:45:37.802215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.981 [2024-12-06 15:45:37.802252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.981 qpair failed and we were unable to recover it. 00:28:31.981 [2024-12-06 15:45:37.802449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.981 [2024-12-06 15:45:37.802485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.981 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.802628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.802663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.802899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.802935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.803215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.803249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 
00:28:31.982 [2024-12-06 15:45:37.803496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.803537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.803876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.803910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.804147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.804180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.804470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.804505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.804711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.804747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 
00:28:31.982 [2024-12-06 15:45:37.804945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.804982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.805294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.805330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.805577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.805614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.805897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.805934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.806209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.806245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 
00:28:31.982 [2024-12-06 15:45:37.806556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.806592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.806798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.806834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.807053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.807087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.807305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.807343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.807615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.807650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 
00:28:31.982 [2024-12-06 15:45:37.807795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.807833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.808155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.808190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.808400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.808435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.808645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.808680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.808824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.808858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 
00:28:31.982 [2024-12-06 15:45:37.809084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.809119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.809400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.809437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.809626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.809661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.809799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.809833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.810095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.810130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 
00:28:31.982 [2024-12-06 15:45:37.810346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.810398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.810556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.810589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.810782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.810823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.811059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.811093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.811291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.811329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 
00:28:31.982 [2024-12-06 15:45:37.811635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.811672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.811867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.811904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.812203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.812237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.812429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.812466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-06 15:45:37.812618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-06 15:45:37.812653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 
00:28:31.982 [2024-12-06 15:45:37.812793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.812828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.813161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.813195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.813418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.813455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.813656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.813690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.813916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.813953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 
00:28:31.983 [2024-12-06 15:45:37.814066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.814102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.814403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.814443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.814605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.814641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.814783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.814816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.814964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.814998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 
00:28:31.983 [2024-12-06 15:45:37.815226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.815262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.815448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.815484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.815689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.815724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.815877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.815913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.816125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.816159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 
00:28:31.983 [2024-12-06 15:45:37.816282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.816319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.816481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.816516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.816730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.816765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.816911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.816945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.817141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.817176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 
00:28:31.983 [2024-12-06 15:45:37.817404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.817440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.817582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.817619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.817760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.817795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.818097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.818133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.818422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.818459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 
00:28:31.983 [2024-12-06 15:45:37.818579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.818615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.818762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.818797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.819101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.819136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.819336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.819384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.819582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.819618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 
00:28:31.983 [2024-12-06 15:45:37.819807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.819843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.820087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.820122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.820344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.820390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.820557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.820592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-06 15:45:37.820791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-06 15:45:37.820825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 
00:28:31.983 [2024-12-06 15:45:37.821126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.983 [2024-12-06 15:45:37.821160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.983 qpair failed and we were unable to recover it.
00:28:31.983 [2024-12-06 15:45:37.821410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.983 [2024-12-06 15:45:37.821447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.983 qpair failed and we were unable to recover it.
00:28:31.983 [2024-12-06 15:45:37.821593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.983 [2024-12-06 15:45:37.821628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.983 qpair failed and we were unable to recover it.
00:28:31.983 [2024-12-06 15:45:37.821776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.983 [2024-12-06 15:45:37.821811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.983 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.822089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.822126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.822252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.822286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.822400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.822437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.822691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.822727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.822866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.822900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.823088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.823122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.823387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.823425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.823582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.823616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.823775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.823810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.824015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.824050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.824302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.824339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.824552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.824590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.824743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.824778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.824905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.824940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.825159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.825195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.825402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.825440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.825579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.825615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.825820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.825855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.826081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.826118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.826249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.826283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.826495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.826529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.826687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.826726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.826873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.826909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.827135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.827171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.827389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.827426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.827626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.827661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.827800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.827835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.827979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.828014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.828298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.828333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.828635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.828671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.828896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.828933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.829238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.829275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.829418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.829455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.829611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.829646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.829855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.829889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.830191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.830226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.830413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.830450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.984 [2024-12-06 15:45:37.830706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-06 15:45:37.830742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.830938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.830974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.831223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.831261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.831508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.831543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.831696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.831731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.831985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.832021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.832327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.832362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.832594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.832631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.832846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.832882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.833087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.833122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.833261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.833296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.833568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.833612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.833770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.833806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.834016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.834050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.834196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.834228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.834493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.834529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.834734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.834767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.834968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.835001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.835213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.835248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.835482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.835517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.835730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.835764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.836020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.836055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.836259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.836293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.836431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.836466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.836604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.836639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.836839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.836875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.837095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.837130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.837324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.837357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.837584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.837619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.837848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.837884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.838200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.838235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.838477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.838515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.838801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.838838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.839034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.839069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.985 [2024-12-06 15:45:37.839327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.985 [2024-12-06 15:45:37.839362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.985 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.839581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.839618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.839900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.839936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.840057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.840092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.840294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.840335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.840559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.840596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.840721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.840753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.841056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.841093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.841298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.841333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.841539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.841575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.841730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.841767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.842039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.842073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.842325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.842363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.842598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.842634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.842853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.842888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.843044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.843077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.843349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.843398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.843621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.843656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.843925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.844004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.844291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.986 [2024-12-06 15:45:37.844331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:31.986 qpair failed and we were unable to recover it.
00:28:31.986 [2024-12-06 15:45:37.844498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.844535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.844753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.844789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.845016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.845052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.845241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.845277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.845546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.845585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 
00:28:31.986 [2024-12-06 15:45:37.845859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.845896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.846108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.846145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.846425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.846463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.846690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.846725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.846915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.846950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 
00:28:31.986 [2024-12-06 15:45:37.847240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.847276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.847522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.847571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.847775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.847812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.848083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.848116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.848306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.848342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 
00:28:31.986 [2024-12-06 15:45:37.848561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.848596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.848809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.848845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.849156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.849192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.849424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.986 [2024-12-06 15:45:37.849461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.986 qpair failed and we were unable to recover it. 00:28:31.986 [2024-12-06 15:45:37.849691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.849725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 
00:28:31.987 [2024-12-06 15:45:37.849941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.849977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.850261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.850298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.850513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.850551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.850835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.850870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.851137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.851172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 
00:28:31.987 [2024-12-06 15:45:37.851385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.851421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.851586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.851621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.851879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.851917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.852034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.852071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.852195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.852231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 
00:28:31.987 [2024-12-06 15:45:37.852508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.852543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.852741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.852776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.853043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.853077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.853376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.853412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.853608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.853642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 
00:28:31.987 [2024-12-06 15:45:37.853922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.853959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.854097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.854131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.854341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.854390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.854706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.854790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.855050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.855088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 
00:28:31.987 [2024-12-06 15:45:37.855324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.855361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.855619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.855656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.855894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.855931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.856190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.856224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.856398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.856437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 
00:28:31.987 [2024-12-06 15:45:37.856722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.856756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.856976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.857013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.857199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.857234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.857418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.857453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.857742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.857776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 
00:28:31.987 [2024-12-06 15:45:37.857996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.858033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.858170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.858204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.858494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.858533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.858815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.858850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.859089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.859124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 
00:28:31.987 [2024-12-06 15:45:37.859417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.859455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.859730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-06 15:45:37.859765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-06 15:45:37.859949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.859983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.860197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.860232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.860547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.860582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-06 15:45:37.860864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.860901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.861123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.861158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.861365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.861410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.861666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.861703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.861906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.861942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-06 15:45:37.862208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.862257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.862450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.862487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.862682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.862718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.862931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.862967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.863153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.863189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-06 15:45:37.863482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.863518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.863812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.863847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.864056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.864090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.864281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.864315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.864532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.864567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-06 15:45:37.864823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.864858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.865043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.865079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.865335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.865381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.865658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.865695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.865970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.866003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-06 15:45:37.866224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.866260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.866407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.866443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.866705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.866742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.866996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.867031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-06 15:45:37.867250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-06 15:45:37.867284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-06 15:45:37.867546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.988 [2024-12-06 15:45:37.867582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.988 qpair failed and we were unable to recover it.
00:28:31.988 [... previous three-line message repeated for each failed connect attempt from 15:45:37.867861 through 15:45:37.898969 ...]
00:28:31.992 [2024-12-06 15:45:37.899183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.992 [2024-12-06 15:45:37.899216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:31.992 qpair failed and we were unable to recover it.
00:28:31.992 [2024-12-06 15:45:37.899440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.899475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.899761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.899797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.900053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.900090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.900313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.900349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.900574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.900608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 
00:28:31.992 [2024-12-06 15:45:37.900800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.900836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.901040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.901075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.901328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.901362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.901512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.901547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.901750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.901784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 
00:28:31.992 [2024-12-06 15:45:37.901986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.902020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.902222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.902257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.902455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.902491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.902749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.902785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.903069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.903111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 
00:28:31.992 [2024-12-06 15:45:37.903239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.903276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.903571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.903608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.903813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.903847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.903978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.904012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.904152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.904187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 
00:28:31.992 [2024-12-06 15:45:37.904390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.904424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.904683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.904715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.904912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.904947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.905203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.905236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.905431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.905466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 
00:28:31.992 [2024-12-06 15:45:37.905688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.905722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.905923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.905956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.906084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.906118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.906251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.906283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.906497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.906533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 
00:28:31.992 [2024-12-06 15:45:37.906667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.906701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.906830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.906864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.906996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.907029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.907327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.907361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.907653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.907689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 
00:28:31.992 [2024-12-06 15:45:37.907826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.992 [2024-12-06 15:45:37.907859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.992 qpair failed and we were unable to recover it. 00:28:31.992 [2024-12-06 15:45:37.908190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.908228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.908439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.908477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.908667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.908702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.908834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.908869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 
00:28:31.993 [2024-12-06 15:45:37.909016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.909052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.909309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.909343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.909632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.909667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.909879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.909913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.910105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.910138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 
00:28:31.993 [2024-12-06 15:45:37.910292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.910325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.910606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.910642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.910947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.910982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.911187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.911223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.911507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.911545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 
00:28:31.993 [2024-12-06 15:45:37.911772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.911808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.912063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.912098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.912296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.912332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.912533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.912570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.912844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.912878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 
00:28:31.993 [2024-12-06 15:45:37.913202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.913237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.913435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.913472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.913780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.913813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.914095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.914130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.914321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.914356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 
00:28:31.993 [2024-12-06 15:45:37.914665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.914701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.914907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.914941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.915073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.915107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.915328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.915363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.915672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.915706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 
00:28:31.993 [2024-12-06 15:45:37.915957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.915992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.916193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.916228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.916494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.916530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.916759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.916793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.917058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.917094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 
00:28:31.993 [2024-12-06 15:45:37.917299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.917335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.917605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.917642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.917898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.917931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.993 qpair failed and we were unable to recover it. 00:28:31.993 [2024-12-06 15:45:37.918213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.993 [2024-12-06 15:45:37.918248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.994 qpair failed and we were unable to recover it. 00:28:31.994 [2024-12-06 15:45:37.918509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.994 [2024-12-06 15:45:37.918547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.994 qpair failed and we were unable to recover it. 
00:28:31.994 [2024-12-06 15:45:37.918731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.994 [2024-12-06 15:45:37.918765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.994 qpair failed and we were unable to recover it. 00:28:31.994 [2024-12-06 15:45:37.918950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.994 [2024-12-06 15:45:37.918986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.994 qpair failed and we were unable to recover it. 00:28:31.994 [2024-12-06 15:45:37.919201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.994 [2024-12-06 15:45:37.919235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.994 qpair failed and we were unable to recover it. 00:28:31.994 [2024-12-06 15:45:37.919443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.994 [2024-12-06 15:45:37.919479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.994 qpair failed and we were unable to recover it. 00:28:31.994 [2024-12-06 15:45:37.919606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.994 [2024-12-06 15:45:37.919641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:31.994 qpair failed and we were unable to recover it. 
00:28:32.274 [2024-12-06 15:45:37.948820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.948855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.948999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.949034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.949338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.949387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.949594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.949628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.949888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.949923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 
00:28:32.274 [2024-12-06 15:45:37.950226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.950261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.950505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.950541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.950767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.950801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.951012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.951047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.951239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.951274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 
00:28:32.274 [2024-12-06 15:45:37.951500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.951536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.951722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.951757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.952052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.952087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.952355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.952404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.952596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.952639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 
00:28:32.274 [2024-12-06 15:45:37.952793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.952827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.952963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.952999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.953117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.953149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.953431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-06 15:45:37.953468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.274 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-06 15:45:37.953681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.953716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-06 15:45:37.953949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.953982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.954259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.954294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.954493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.954530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.954725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.954760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.955048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.955081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-06 15:45:37.955285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.955321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.955517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.955554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.955735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.955769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.956081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.956117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.956380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.956416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-06 15:45:37.956626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.956660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.956941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.956975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.957116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.957151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.957348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.957409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.957628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.957663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-06 15:45:37.957857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.957891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.958177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.958211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.958512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.958550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.958765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.958800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.958996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.959031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-06 15:45:37.959307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.959343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.959607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.959649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.959768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.959803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.960056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.960090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.960242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.960277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-06 15:45:37.960472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.960508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.960798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.960833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.961061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.961095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.961310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.961345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.961558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.961592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-06 15:45:37.961895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.961929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.962195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.962230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.962521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.962557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.962712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.962747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.962905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.962939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-06 15:45:37.963146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.963182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-06 15:45:37.963494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-06 15:45:37.963529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.963748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.963781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.964004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.964038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.964223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.964258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 
00:28:32.276 [2024-12-06 15:45:37.964527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.964563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.964842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.964877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.965118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.965153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.965387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.965424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.965581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.965616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 
00:28:32.276 [2024-12-06 15:45:37.965818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.965852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.966114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.966147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.966361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.966412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.966667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.966701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.966935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.966972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 
00:28:32.276 [2024-12-06 15:45:37.967226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.967262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.967412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.967448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.967679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.967715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.967998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.968031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.968294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.968329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 
00:28:32.276 [2024-12-06 15:45:37.968561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.968596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.968754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.968790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.968940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.968974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.969192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.969227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-06 15:45:37.969352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-06 15:45:37.969400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 
00:28:32.279 [2024-12-06 15:45:37.998452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.279 [2024-12-06 15:45:37.998488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.279 qpair failed and we were unable to recover it. 00:28:32.279 [2024-12-06 15:45:37.998685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.279 [2024-12-06 15:45:37.998719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.279 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:37.998930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:37.998964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:37.999151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:37.999185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:37.999418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:37.999452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 
00:28:32.280 [2024-12-06 15:45:37.999611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:37.999644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:37.999851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:37.999886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.000092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.000126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.000407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.000442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.000647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.000682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 
00:28:32.280 [2024-12-06 15:45:38.000873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.000908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.001200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.001235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.001441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.001477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.001735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.001770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.002004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.002040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 
00:28:32.280 [2024-12-06 15:45:38.002264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.002299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.002595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.002631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.002835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.002870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.003097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.003131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.003319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.003352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 
00:28:32.280 [2024-12-06 15:45:38.003651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.003686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.003891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.003926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.004085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.004119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.004326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.004361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.004558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.004594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 
00:28:32.280 [2024-12-06 15:45:38.004797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.004830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.005116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.005151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.005394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.005431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.005647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.005681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.005814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.005846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 
00:28:32.280 [2024-12-06 15:45:38.006109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.006145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.006362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.006412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.006606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.006640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.006784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.006818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.007049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.007083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 
00:28:32.280 [2024-12-06 15:45:38.007365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.007429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.007666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.007699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.007933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.007967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.008187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.008222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 00:28:32.280 [2024-12-06 15:45:38.008471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.280 [2024-12-06 15:45:38.008507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.280 qpair failed and we were unable to recover it. 
00:28:32.280 [2024-12-06 15:45:38.008693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.008728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.008931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.008972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.009098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.009132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.009410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.009447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.009658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.009695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.281 [2024-12-06 15:45:38.009949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.009984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.010254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.010290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.010501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.010538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.010794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.010829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.011039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.011074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.281 [2024-12-06 15:45:38.011283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.011318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.011527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.011563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.011762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.011797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.012038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.012074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.012391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.012427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.281 [2024-12-06 15:45:38.012627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.012664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.012867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.012902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.013126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.013161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.013448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.013485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.013620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.013654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.281 [2024-12-06 15:45:38.013855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.013891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.014201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.014235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.014519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.014555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.014703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.014738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.014885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.014920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.281 [2024-12-06 15:45:38.015171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.015206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.015464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.015501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.015727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.015762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.015918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.015958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.016277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.016312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.281 [2024-12-06 15:45:38.016541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.016577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.016729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.016763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.016965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.017000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.017282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.017318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-06 15:45:38.017483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.017520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.281 [2024-12-06 15:45:38.017724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-06 15:45:38.017758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.285 (same error sequence for tqpair=0x2179be0 with addr=10.0.0.2, port=4420 — connect() failed, errno = 111; qpair failed and we were unable to recover it — repeated from [2024-12-06 15:45:38.017963] through [2024-12-06 15:45:38.045758]) 
00:28:32.285 [2024-12-06 15:45:38.045947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.045989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.046195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.046232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.046499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.046538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.046743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.046782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.046931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.046965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 
00:28:32.285 [2024-12-06 15:45:38.047178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.047214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.047410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.047449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.047708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.047744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.048013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.048049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.048236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.048272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 
00:28:32.285 [2024-12-06 15:45:38.048406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.048444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.048652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.048687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.048825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.048861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.049138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.049174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.049453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.049490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 
00:28:32.285 [2024-12-06 15:45:38.049640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.049677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.049835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.049871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.050220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.050257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.050473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.050509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.050791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.050828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 
00:28:32.285 [2024-12-06 15:45:38.050951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.050987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.051134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.051168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.051382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.051418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.051579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.051618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.051808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.051842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 
00:28:32.285 [2024-12-06 15:45:38.051989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.052026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.052271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.052308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.052542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.052580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.052800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.052836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.053049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.053084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 
00:28:32.285 [2024-12-06 15:45:38.053622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.053666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.053826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.053861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.054113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.054149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-06 15:45:38.054462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-06 15:45:38.054501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.054761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.054799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 
00:28:32.286 [2024-12-06 15:45:38.054961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.054997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.055254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.055291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.055519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.055556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.055814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.055851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.056075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.056112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 
00:28:32.286 [2024-12-06 15:45:38.056389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.056426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.056586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.056623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.056874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.056910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.057045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.057080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.057296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.057332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 
00:28:32.286 [2024-12-06 15:45:38.057539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.057577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.057734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.057770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.057992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.058027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.058232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.058267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.058484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.058522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 
00:28:32.286 [2024-12-06 15:45:38.058716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.058754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.058876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.058909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.059133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.059167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.059357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.059404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.059567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.059602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 
00:28:32.286 [2024-12-06 15:45:38.059832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.059869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.060006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.060042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.060306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.060342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.060498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.060535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.060789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.060824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 
00:28:32.286 [2024-12-06 15:45:38.061044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.061080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.061334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.061400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.061549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.061585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.061734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.061771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.061903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.061937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 
00:28:32.286 [2024-12-06 15:45:38.062080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.062117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.062255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.062289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.062441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.062477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.062615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.062658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.062798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.062835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 
00:28:32.286 [2024-12-06 15:45:38.063079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.063114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.286 [2024-12-06 15:45:38.063391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.286 [2024-12-06 15:45:38.063427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.286 qpair failed and we were unable to recover it. 00:28:32.287 [2024-12-06 15:45:38.063586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.287 [2024-12-06 15:45:38.063623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.287 qpair failed and we were unable to recover it. 00:28:32.287 [2024-12-06 15:45:38.063873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.287 [2024-12-06 15:45:38.063909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.287 qpair failed and we were unable to recover it. 00:28:32.287 [2024-12-06 15:45:38.064173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.287 [2024-12-06 15:45:38.064209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.287 qpair failed and we were unable to recover it. 
00:28:32.287 [2024-12-06 15:45:38.064509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-06 15:45:38.064547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.290 [preceding three-line error sequence repeated, with identical content, for every subsequent reconnect attempt from 15:45:38.064667 through 15:45:38.093741: connect() failed with errno = 111 and tqpair=0x2179be0 could not reach 10.0.0.2 port 4420]
00:28:32.290 [2024-12-06 15:45:38.094069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.094111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.094418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.094456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.094616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.094652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.094789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.094824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.094958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.094994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 
00:28:32.290 [2024-12-06 15:45:38.095184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.095220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.095434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.095471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.095729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.095767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.096051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.096086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.096342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.096401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 
00:28:32.290 [2024-12-06 15:45:38.096710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.096746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.097017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.097051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.097256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.097293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.097474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.097514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.097733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.097768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 
00:28:32.290 [2024-12-06 15:45:38.098050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.098085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.098362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.098410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.098669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.098705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.098830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.098865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.099071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.099106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 
00:28:32.290 [2024-12-06 15:45:38.099312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.099350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.099565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.099600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.099752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.099786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.099987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.100024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.100243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.100278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 
00:28:32.290 [2024-12-06 15:45:38.100440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-06 15:45:38.100475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-06 15:45:38.100614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.100650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.100845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.100886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.101072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.101106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.101318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.101352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 
00:28:32.291 [2024-12-06 15:45:38.101585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.101621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.101778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.101815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.102008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.102042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.102239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.102276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.102491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.102526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 
00:28:32.291 [2024-12-06 15:45:38.102732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.102768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.102922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.102959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.103148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.103182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.103390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.103426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.103636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.103672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 
00:28:32.291 [2024-12-06 15:45:38.103862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.103896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.104107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.104146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.104378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.104415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.104671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.104708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.104914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.104949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 
00:28:32.291 [2024-12-06 15:45:38.105214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.105251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.105457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.105494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.105644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.105681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.105887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.105925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.106119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.106153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 
00:28:32.291 [2024-12-06 15:45:38.106363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.106417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.106629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.106666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.106862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.106898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.107093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.107128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.107321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.107358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 
00:28:32.291 [2024-12-06 15:45:38.107522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.107559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.291 [2024-12-06 15:45:38.107776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.291 [2024-12-06 15:45:38.107811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.291 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.108100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.108136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.108280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.108316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.108520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.108557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 
00:28:32.292 [2024-12-06 15:45:38.108820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.108855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.109087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.109122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.109311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.109347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.109490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.109527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.109771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.109806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 
00:28:32.292 [2024-12-06 15:45:38.109929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.109963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.110175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.110213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.110436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.110475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.110672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.110712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.110914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.110948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 
00:28:32.292 [2024-12-06 15:45:38.111153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.111187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.111448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.111486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.111692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.111728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.111886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.111920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.113860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.113928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 
00:28:32.292 [2024-12-06 15:45:38.114219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.114258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.114546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.114584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.114726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.114761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.114910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.114944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.115152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.115189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 
00:28:32.292 [2024-12-06 15:45:38.115441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.115476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.115726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.115763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.115977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.116012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.116249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.116285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.116500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.116538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 
00:28:32.292 [2024-12-06 15:45:38.116742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.116776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.116929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.116965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.117222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.117259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.117456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.117494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.117704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.117738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 
00:28:32.292 [2024-12-06 15:45:38.117945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.117980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.118292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.118327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.118601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.292 [2024-12-06 15:45:38.118637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.292 qpair failed and we were unable to recover it. 00:28:32.292 [2024-12-06 15:45:38.118877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.118912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.119237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.119274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 
00:28:32.293 [2024-12-06 15:45:38.119529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.119572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.119778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.119813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.119950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.119987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.120270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.120304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.120536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.120575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 
00:28:32.293 [2024-12-06 15:45:38.120719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.120753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.121017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.121051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.121266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.121302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.121456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.121493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.121636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.121673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 
00:28:32.293 [2024-12-06 15:45:38.121861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.121896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.122149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.122187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.122392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.122430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.122570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.122607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.122904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.122938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 
00:28:32.293 [2024-12-06 15:45:38.123219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.123254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.123454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.123491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.123641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.123676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.123876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.123911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.124219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.124255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 
00:28:32.293 [2024-12-06 15:45:38.124389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.124425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.124649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.124686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.124827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.124862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.125071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.125109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.125412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.125448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 
00:28:32.293 [2024-12-06 15:45:38.125672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.125707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.125842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.125876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.126078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.126119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.126304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.126339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.126560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.126597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 
00:28:32.293 [2024-12-06 15:45:38.126800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.126837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.127001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.127038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.127181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.127215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.127353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.127405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.127545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.127580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 
00:28:32.293 [2024-12-06 15:45:38.127795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-06 15:45:38.127832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-06 15:45:38.128127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.128162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.128365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.128435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.128588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.128627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.128783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.128821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-06 15:45:38.129063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.129097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.129300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.129335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.129488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.129526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.129792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.129827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.130044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.130078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-06 15:45:38.130190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.130226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.130459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.130497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.130706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.130742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.130942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.130979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.131172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.131209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-06 15:45:38.131422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.131460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.131670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.131705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.131858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.131893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.132072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.132107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.132361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.132412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-06 15:45:38.132706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.132741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.134315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.134412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.134653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.134689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.134860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.134896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.135125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.135162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-06 15:45:38.135384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.135423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.135630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.135666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.135889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.135925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.136067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.136103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.136224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.136262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-06 15:45:38.136420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.136457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.136670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.136708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.137007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.137044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.137305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.137403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.137619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.137663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-06 15:45:38.137955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.137993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.138200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.138236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.138520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-06 15:45:38.138558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-06 15:45:38.138722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-06 15:45:38.138760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-06 15:45:38.138896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-06 15:45:38.138930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 
00:28:32.295 [2024-12-06 15:45:38.139130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-06 15:45:38.139167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-06 15:45:38.139314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-06 15:45:38.139349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-06 15:45:38.139553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-06 15:45:38.139591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-06 15:45:38.139740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-06 15:45:38.139775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-06 15:45:38.139982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-06 15:45:38.140018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 
00:28:32.295 [2024-12-06 15:45:38.146203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.295 [2024-12-06 15:45:38.146238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:32.295 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-06 15:45:38.147081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-06 15:45:38.147164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-06 15:45:38.147500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-06 15:45:38.147586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-06 15:45:38.168735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.168770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.169029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.169064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.169342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.169384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.169675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.169709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.170018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.170052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 
00:28:32.298 [2024-12-06 15:45:38.170270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.170306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.170499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.170535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.170744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.170779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.170970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.171004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.171196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.171230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 
00:28:32.298 [2024-12-06 15:45:38.171493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.171528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.171829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.171864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.172092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.172126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.172430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.172467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.172723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.172757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 
00:28:32.298 [2024-12-06 15:45:38.172951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.172985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.173268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.173302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.173518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.173554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.173837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.173871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.174087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.174122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 
00:28:32.298 [2024-12-06 15:45:38.174407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.174443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.174722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.174758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.175040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.175075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-06 15:45:38.175358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-06 15:45:38.175403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.175623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.175658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-06 15:45:38.175873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.175907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.176163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.176197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.176425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.176461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.176694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.176729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.176923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.176958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-06 15:45:38.177258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.177292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.177443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.177480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.177754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.177788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.178088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.178122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.178409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.178445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-06 15:45:38.178655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.178690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.178896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.178932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.179185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.179219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.179442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.179483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.179754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.179789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-06 15:45:38.180001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.180034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.180232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.180266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.180547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.180583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.180791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.180827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.181089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.181123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-06 15:45:38.181318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.181352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.181489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.181524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.181807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.181842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.182123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.182156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.182436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.182473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-06 15:45:38.182683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.182717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.182971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.183005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.183202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.183235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.183497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.183533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.183743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.183777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-06 15:45:38.184017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.184052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.184307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.184342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.184649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.184684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.184944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.184979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-06 15:45:38.185207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.185240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-06 15:45:38.185536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-06 15:45:38.185572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.185797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.185831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.186106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.186140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.186327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.186360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.186598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.186633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 
00:28:32.300 [2024-12-06 15:45:38.186840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.186874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.187180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.187213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.187356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.187402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.187678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.187711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.187974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.188007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 
00:28:32.300 [2024-12-06 15:45:38.188145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.188179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.188392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.188428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.188710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.188744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.188947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.188981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.189233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.189267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 
00:28:32.300 [2024-12-06 15:45:38.189523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.189560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.189865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.189899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.190146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.190181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.190488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.190529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-06 15:45:38.190783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-06 15:45:38.190818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 
00:28:32.303 [2024-12-06 15:45:38.223162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-06 15:45:38.223197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-06 15:45:38.223446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-06 15:45:38.223481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-06 15:45:38.223762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-06 15:45:38.223797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-06 15:45:38.224080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-06 15:45:38.224114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-06 15:45:38.224387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-06 15:45:38.224422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 
00:28:32.303 [2024-12-06 15:45:38.224711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-06 15:45:38.224746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-06 15:45:38.225018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-06 15:45:38.225052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-06 15:45:38.225193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-06 15:45:38.225228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-06 15:45:38.225426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-06 15:45:38.225467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-06 15:45:38.225746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-06 15:45:38.225779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 
00:28:32.303 [2024-12-06 15:45:38.226056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-06 15:45:38.226091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-06 15:45:38.226406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-06 15:45:38.226443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-06 15:45:38.226658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-06 15:45:38.226692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-06 15:45:38.226942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-06 15:45:38.226976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.227257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.227291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-06 15:45:38.227508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.227545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.227770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.227804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.228105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.228140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.228348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.228393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.228673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.228707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-06 15:45:38.228929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.228964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.229246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.229281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.229561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.229597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.229880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.229915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.230195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.230228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-06 15:45:38.230356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.230411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.230600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.230634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.230846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.230881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.231070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.231104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.231356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.231405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-06 15:45:38.231699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.231733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.231866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.231900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.232175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.232210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.232480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.232517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.232743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.232778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-06 15:45:38.233006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.233042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.233255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.233289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.233474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.233510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.233767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.233803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.234087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.234121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-06 15:45:38.234442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.234479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.234747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.234782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.235074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.235110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.235405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.235441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.235707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.235742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-06 15:45:38.235973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.236007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.236230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.236264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.236549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.236585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.236864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.236904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-06 15:45:38.237180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.237214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-06 15:45:38.237419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-06 15:45:38.237453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.237760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.237794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.237997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.238032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.238304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.238338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.238616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.238652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 
00:28:32.305 [2024-12-06 15:45:38.238938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.238973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.239184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.239218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.239497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.239533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.239789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.239823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.240127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.240162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 
00:28:32.305 [2024-12-06 15:45:38.240420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.240457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.240707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.240742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.240956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.240991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.241247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.241282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.241574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.241610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 
00:28:32.305 [2024-12-06 15:45:38.241840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.241874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.242179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.242214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.242479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.242515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.242800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.242834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.243109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.243144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 
00:28:32.305 [2024-12-06 15:45:38.243348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.243392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.243589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.243623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.243879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.243913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.244219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.244252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-06 15:45:38.244518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-06 15:45:38.244554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 
00:28:32.305 [2024-12-06 15:45:38.244816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.305 [2024-12-06 15:45:38.244851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.305 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair-failure sequence above repeats verbatim for the same tqpair=0x7f6e8c000b90, addr=10.0.0.2, port=4420, with timestamps advancing from 15:45:38.244816 to 15:45:38.277507 ...]
00:28:32.585 [2024-12-06 15:45:38.277471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.585 [2024-12-06 15:45:38.277507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.585 qpair failed and we were unable to recover it.
00:28:32.585 [2024-12-06 15:45:38.277789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.277823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.278033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.278069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.278279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.278313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.278514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.278549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.278828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.278862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 
00:28:32.585 [2024-12-06 15:45:38.279118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.279155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.279296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.279331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.279605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.279641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.279865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.279899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.280163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.280197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 
00:28:32.585 [2024-12-06 15:45:38.280455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.280492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.280785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.280820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.281118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.281152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.281422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.281457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.281659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.281692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 
00:28:32.585 [2024-12-06 15:45:38.281947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.281982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.282263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.282297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.282523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.282559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.282780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.282821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.283015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.283050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 
00:28:32.585 [2024-12-06 15:45:38.283234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.283268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.283456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.283493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.283693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.283728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.283942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.283975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.284228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.284261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 
00:28:32.585 [2024-12-06 15:45:38.284562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.284599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.284863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.284897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.285176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.285211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.285499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.285534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 00:28:32.585 [2024-12-06 15:45:38.285751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.585 [2024-12-06 15:45:38.285786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.585 qpair failed and we were unable to recover it. 
00:28:32.586 [2024-12-06 15:45:38.285979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.286014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.286283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.286318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.286473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.286508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.286763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.286797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.287093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.287128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 
00:28:32.586 [2024-12-06 15:45:38.287343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.287385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.287589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.287624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.287820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.287855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.288048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.288081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.288356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.288403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 
00:28:32.586 [2024-12-06 15:45:38.288600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.288635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.288861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.288894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.289196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.289230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.289461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.289497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.289712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.289747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 
00:28:32.586 [2024-12-06 15:45:38.290009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.290045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.290339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.290401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.290680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.290716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.290996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.291030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.291275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.291311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 
00:28:32.586 [2024-12-06 15:45:38.291612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.291647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.291856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.291893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.292098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.292132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.292390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.292425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.292732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.292766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 
00:28:32.586 [2024-12-06 15:45:38.292972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.293008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.293282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.293317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.293533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.293570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.293765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.293799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.294014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.294049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 
00:28:32.586 [2024-12-06 15:45:38.294251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.294285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.294516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.294553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.294705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.294741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.294879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.294913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.295174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.295209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 
00:28:32.586 [2024-12-06 15:45:38.295406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.295441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.295697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.295732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.586 [2024-12-06 15:45:38.295993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.586 [2024-12-06 15:45:38.296028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.586 qpair failed and we were unable to recover it. 00:28:32.587 [2024-12-06 15:45:38.296332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.587 [2024-12-06 15:45:38.296378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.587 qpair failed and we were unable to recover it. 00:28:32.587 [2024-12-06 15:45:38.296651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.587 [2024-12-06 15:45:38.296687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.587 qpair failed and we were unable to recover it. 
00:28:32.587 [2024-12-06 15:45:38.296985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.587 [2024-12-06 15:45:38.297020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.587 qpair failed and we were unable to recover it. 00:28:32.587 [2024-12-06 15:45:38.297224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.587 [2024-12-06 15:45:38.297259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.587 qpair failed and we were unable to recover it. 00:28:32.587 [2024-12-06 15:45:38.297495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.587 [2024-12-06 15:45:38.297534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.587 qpair failed and we were unable to recover it. 00:28:32.587 [2024-12-06 15:45:38.297839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.587 [2024-12-06 15:45:38.297875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.587 qpair failed and we were unable to recover it. 00:28:32.587 [2024-12-06 15:45:38.298132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.587 [2024-12-06 15:45:38.298167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.587 qpair failed and we were unable to recover it. 
00:28:32.587 [2024-12-06 15:45:38.298425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.587 [2024-12-06 15:45:38.298462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.587 qpair failed and we were unable to recover it.
[... identical posix_sock_create connect() failures (errno = 111, ECONNREFUSED) and nvme_tcp_qpair_connect_sock errors for tqpair=0x7f6e8c000b90 (addr=10.0.0.2, port=4420) repeat continuously from 15:45:38.298748 through 15:45:38.329573; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:32.590 [2024-12-06 15:45:38.329727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.590 [2024-12-06 15:45:38.329761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.590 qpair failed and we were unable to recover it.
00:28:32.590 [2024-12-06 15:45:38.329975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.330009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.330205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.330239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.330499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.330535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.330817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.330852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.331153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.331187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 
00:28:32.590 [2024-12-06 15:45:38.331447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.331484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.331785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.331819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.332047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.332083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.332390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.332428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.332626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.332662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 
00:28:32.590 [2024-12-06 15:45:38.332986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.333020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.333304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.333339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.333631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.333667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.333956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.333991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.334210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.334245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 
00:28:32.590 [2024-12-06 15:45:38.334440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.334483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.334667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.334704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.334981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.335015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.335226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.335261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.335475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.335511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 
00:28:32.590 [2024-12-06 15:45:38.335794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.335830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.336087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.336120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.336457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.336497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-06 15:45:38.336692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-06 15:45:38.336729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.336858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.336892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-06 15:45:38.337175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.337210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.337468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.337505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.337725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.337762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.338041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.338075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.338285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.338320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-06 15:45:38.338620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.338659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.338883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.338917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.339068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.339104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.339323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.339357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.339574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.339610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-06 15:45:38.339812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.339847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.340050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.340087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.340221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.340259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.340448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.340484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.340763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.340801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-06 15:45:38.341080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.341115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.341255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.341289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.341518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.341554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.341789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.341825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.341966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.342000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-06 15:45:38.342190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.342226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.342452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.342494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.342788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.342829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.342954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.342990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.343179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.343219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-06 15:45:38.343444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.343483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.343671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.343706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.343962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.343998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.344190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.344224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.344503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.344539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-06 15:45:38.344728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.344775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-06 15:45:38.344979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-06 15:45:38.345020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.345213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.345248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.345483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.345521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.345779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.345813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 
00:28:32.592 [2024-12-06 15:45:38.346006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.346043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.346296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.346331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.346536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.346576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.346777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.346813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.347030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.347067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 
00:28:32.592 [2024-12-06 15:45:38.347257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.347290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.347433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.347470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.347658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.347691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.347887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.347921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.348130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.348164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 
00:28:32.592 [2024-12-06 15:45:38.348297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.348334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.348654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.348695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.348888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.348922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.349149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.349183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.349403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.349441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 
00:28:32.592 [2024-12-06 15:45:38.349588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.349623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.349763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.349798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.350008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.350041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.350317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.350351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-06 15:45:38.350661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-06 15:45:38.350704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 
00:28:32.595 [2024-12-06 15:45:38.381809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.381845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-06 15:45:38.382104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.382139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-06 15:45:38.382427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.382465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-06 15:45:38.382681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.382724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-06 15:45:38.382921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.382956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 
00:28:32.595 [2024-12-06 15:45:38.383239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.383277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-06 15:45:38.383550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.383588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-06 15:45:38.383796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.383842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-06 15:45:38.384053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.384088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-06 15:45:38.384224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.384266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 
00:28:32.595 [2024-12-06 15:45:38.384527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.384568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-06 15:45:38.384797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.384836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-06 15:45:38.385099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.385137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-06 15:45:38.385399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.385437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-06 15:45:38.385588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.385626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 
00:28:32.595 [2024-12-06 15:45:38.385818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.385856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-06 15:45:38.386164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.386199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-06 15:45:38.386396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-06 15:45:38.386433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.386662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.386697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.386903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.386937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-06 15:45:38.387222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.387258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.387470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.387509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.387733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.387770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.388039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.388075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.388264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.388306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-06 15:45:38.388585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.388629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.388845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.388880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.389143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.389178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.389465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.389499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.389748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.389783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-06 15:45:38.390031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.390065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.390351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.390399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.390688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.390724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.390918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.390954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.391182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.391215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-06 15:45:38.391434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.391470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.391621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.391656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.391951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.391986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.392272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.392307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.392533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.392569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-06 15:45:38.392824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.392858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.393052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.393086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.393319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.393353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.393562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.393598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.393848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.393884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-06 15:45:38.394073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.394107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.394364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.394410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.394598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.394632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.394895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.394929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.395127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.395160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-06 15:45:38.395300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.395335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-06 15:45:38.395651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-06 15:45:38.395686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.395959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.395998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.396259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.396296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.396582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.396619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-06 15:45:38.396827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.396862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.397118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.397155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.397458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.397495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.397801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.397839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.398114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.398147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-06 15:45:38.398347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.398395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.398589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.398627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.398815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.398851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.399128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.399164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.399427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.399465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-06 15:45:38.399691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.399733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.399874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.399910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.400193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.400228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.400449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.400509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.400822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.400856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-06 15:45:38.401092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.401126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.401451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.401486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.401671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.401705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.401907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.401942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-06 15:45:38.402097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.402131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-06 15:45:38.402419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-06 15:45:38.402456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it.
[previous error triplet repeated ~114 more times between 15:45:38.402 and 15:45:38.433, identical except for timestamps: every connect() attempt to 10.0.0.2 port 4420 for tqpair=0x7f6e8c000b90 failed with errno = 111 (connection refused) and the qpair could not be recovered]
00:28:32.600 [2024-12-06 15:45:38.433293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-06 15:45:38.433327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 00:28:32.600 [2024-12-06 15:45:38.433630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-06 15:45:38.433666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 00:28:32.600 [2024-12-06 15:45:38.433872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-06 15:45:38.433907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 00:28:32.600 [2024-12-06 15:45:38.434114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-06 15:45:38.434148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 00:28:32.600 [2024-12-06 15:45:38.434403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-06 15:45:38.434439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 
00:28:32.600 [2024-12-06 15:45:38.434593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-06 15:45:38.434628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 00:28:32.600 [2024-12-06 15:45:38.434881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-06 15:45:38.434917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 00:28:32.600 [2024-12-06 15:45:38.435100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-06 15:45:38.435134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 00:28:32.600 [2024-12-06 15:45:38.435318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-06 15:45:38.435352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 00:28:32.600 [2024-12-06 15:45:38.435635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-06 15:45:38.435671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 
00:28:32.601 [2024-12-06 15:45:38.435896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.435931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.436195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.436231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.436478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.436514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.436722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.436757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.436951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.436985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 
00:28:32.601 [2024-12-06 15:45:38.437201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.437235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.437490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.437526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.437734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.437769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.437986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.438021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.438328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.438363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 
00:28:32.601 [2024-12-06 15:45:38.438679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.438715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.438863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.438898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.439151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.439192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.439473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.439510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.439786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.439820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 
00:28:32.601 [2024-12-06 15:45:38.440063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.440098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.440298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.440333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.440622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.440659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.440946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.440981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.441239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.441274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 
00:28:32.601 [2024-12-06 15:45:38.441513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.441549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.441808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.441843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.442071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.442105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.442384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.442421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.442614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.442649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 
00:28:32.601 [2024-12-06 15:45:38.442892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.442926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.443189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.443223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.443497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.443534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.443789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.443824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.444017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.444052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 
00:28:32.601 [2024-12-06 15:45:38.444335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.444379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.444671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.444706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.444966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.445000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.445212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.445248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.445534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.445571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 
00:28:32.601 [2024-12-06 15:45:38.445781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.445816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.601 [2024-12-06 15:45:38.446029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.601 [2024-12-06 15:45:38.446064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.601 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.446289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.446324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.446472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.446508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.446637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.446673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 
00:28:32.602 [2024-12-06 15:45:38.446947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.446981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.447258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.447293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.447562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.447598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.447920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.447954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.448232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.448268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 
00:28:32.602 [2024-12-06 15:45:38.448418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.448453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.448645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.448680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.448956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.448991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.449257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.449291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.449503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.449540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 
00:28:32.602 [2024-12-06 15:45:38.449753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.449788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.450066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.450101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.450361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.450412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.450622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.450658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.450913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.450948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 
00:28:32.602 [2024-12-06 15:45:38.451164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.451199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.451324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.451358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.451571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.451607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.451884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.451919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.452218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.452252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 
00:28:32.602 [2024-12-06 15:45:38.452543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.452579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.452853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.452888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.453176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.453210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.453515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.453551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.453781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.453816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 
00:28:32.602 [2024-12-06 15:45:38.454005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.454040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.454351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.454398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.454541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.454576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.454844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.454879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-06 15:45:38.455170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-06 15:45:38.455205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 
00:28:32.605 [2024-12-06 15:45:38.485158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.605 [2024-12-06 15:45:38.485192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.605 qpair failed and we were unable to recover it. 00:28:32.605 [2024-12-06 15:45:38.485489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.605 [2024-12-06 15:45:38.485525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.605 qpair failed and we were unable to recover it. 00:28:32.605 [2024-12-06 15:45:38.485802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.605 [2024-12-06 15:45:38.485837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.605 qpair failed and we were unable to recover it. 00:28:32.605 [2024-12-06 15:45:38.486038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.605 [2024-12-06 15:45:38.486073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.605 qpair failed and we were unable to recover it. 00:28:32.605 [2024-12-06 15:45:38.486283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.605 [2024-12-06 15:45:38.486318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.605 qpair failed and we were unable to recover it. 
00:28:32.606 [2024-12-06 15:45:38.486551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.486587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.486780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.486814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.487038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.487072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.487270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.487303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.487621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.487656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 
00:28:32.606 [2024-12-06 15:45:38.487903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.487937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.488232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.488265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.488477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.488513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.488791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.488826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.488972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.489006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 
00:28:32.606 [2024-12-06 15:45:38.489229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.489263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.489457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.489493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.489798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.489832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.490116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.490151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.490356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.490403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 
00:28:32.606 [2024-12-06 15:45:38.490685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.490719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.490991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.491025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.491250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.491284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.491510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.491546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.491823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.491858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 
00:28:32.606 [2024-12-06 15:45:38.492125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.492161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.492458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.492494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.492782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.492815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.493033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.493066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.493319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.493354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 
00:28:32.606 [2024-12-06 15:45:38.493635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.493670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.493810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.493844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.494050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.494090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.494390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.494426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.494632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.494667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 
00:28:32.606 [2024-12-06 15:45:38.494920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.494955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.495140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.495175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.495461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.495497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.495808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.495845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.496141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.496177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 
00:28:32.606 [2024-12-06 15:45:38.496307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.496342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.496604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.496640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.606 [2024-12-06 15:45:38.496847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.606 [2024-12-06 15:45:38.496881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.606 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.497146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.497180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.497383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.497419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 
00:28:32.607 [2024-12-06 15:45:38.497631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.497666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.497968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.498004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.498265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.498300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.498526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.498562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.498693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.498729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 
00:28:32.607 [2024-12-06 15:45:38.498955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.498991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.499173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.499210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.499421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.499457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.499741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.499776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.499892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.499924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 
00:28:32.607 [2024-12-06 15:45:38.500195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.500230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.500448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.500485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.500745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.500780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.500995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.501031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.501238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.501277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 
00:28:32.607 [2024-12-06 15:45:38.501477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.501513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.501798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.501833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.502126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.502161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.502441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.502479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.502763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.502799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 
00:28:32.607 [2024-12-06 15:45:38.503073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.503108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.503423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.503460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.503708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.503743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.504043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.504078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.504297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.504332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 
00:28:32.607 [2024-12-06 15:45:38.504649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.504686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.504835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.504870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.505194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.505235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.505497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.505533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-06 15:45:38.505789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-06 15:45:38.505824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 
00:28:32.607 [2024-12-06 15:45:38.506034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-06 15:45:38.506068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-06 15:45:38.506305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-06 15:45:38.506339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-06 15:45:38.506543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-06 15:45:38.506578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-06 15:45:38.506836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-06 15:45:38.506870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-06 15:45:38.507021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-06 15:45:38.507056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-06 15:45:38.507310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-06 15:45:38.507345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.507583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.507618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.507800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.507835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.507980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.508015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.508248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.508282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.508541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.508578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.508732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.508766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.508959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.508994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.509260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.509293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.509493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.509528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.509750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.509784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.509989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.510024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.510224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.510258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.510486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.510522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.510804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.510838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.511121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.511155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.511414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.511449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.511725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.511760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.512070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.512104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.512392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.512429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.512703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.512737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.512959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.512994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.513299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.513333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.513488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.513525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.513791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.513824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.514114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.514148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.514423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.514459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.514669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.514704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.514889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.514924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.515197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.515232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.515552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.515588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.515789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.515822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.516023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.516063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.516259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.516294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.516491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.516527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.516785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.516821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.517009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.517043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.517327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.517362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.608 [2024-12-06 15:45:38.517576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.608 [2024-12-06 15:45:38.517612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.608 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.517890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.517925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.518122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.518157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.518309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.518343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.518561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.518598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.518810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.518844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.519057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.519091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.519295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.519329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.519552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.519588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.519893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.519927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.520184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.520219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.520438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.520475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.520684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.520718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.520923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.520958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.521257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.521292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.521441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.521476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.521586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.521621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.521839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.521874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.522077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.522111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.522298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.522332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.522554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.522590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.522725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.522761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.522970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.523005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.523285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.523320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.523603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.523639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.523918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.523953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.524216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.524251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.524448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.524484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.524751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.524786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.525015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.525050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.525337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.525381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.525593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.525628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.525880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.525915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.526213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.526248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.526524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.526565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.526761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.526794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.527061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.527094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.527379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.609 [2024-12-06 15:45:38.527415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.609 qpair failed and we were unable to recover it.
00:28:32.609 [2024-12-06 15:45:38.527694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.527729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.527934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.527970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.528167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.528202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.528412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.528448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.528644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.528679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.528930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.528964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.529270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.529304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.529515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.529550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.529840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.529874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.530147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.530181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.530454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.530491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.530695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.530730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.530989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.531024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.531282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.531317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.531630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.531667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.531886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.531921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.532175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.532210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.532519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.532555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.532698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.532733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.533013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.533048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.533344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.533389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.533675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.533709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.533977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.534012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.534282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.534317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.534616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.534651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.534928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.534963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.535217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.535252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.535486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.535522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.535744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.535778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.535973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.536008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.536261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.536296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.536451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.536488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.536690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.536725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-06 15:45:38.536864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.610 [2024-12-06 15:45:38.536899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.611 qpair failed and we were unable to recover it.
00:28:32.611 [2024-12-06 15:45:38.537177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.537212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.537331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.537379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.537677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.537719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.537920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.537955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.538164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.538198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 
00:28:32.611 [2024-12-06 15:45:38.538388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.538424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.538620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.538655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.538859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.538893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.539031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.539066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.539253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.539289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 
00:28:32.611 [2024-12-06 15:45:38.539494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.539529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.539721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.539755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.539966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.540003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.540262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.540298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.540526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.540563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 
00:28:32.611 [2024-12-06 15:45:38.540763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.540799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.540995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.541030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.541185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.541220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.541527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.541564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.541825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.541860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 
00:28:32.611 [2024-12-06 15:45:38.542092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.542129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.542336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.542382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.542591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.542627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.542820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.542855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.543062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.543096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 
00:28:32.611 [2024-12-06 15:45:38.543307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.543343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.543549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.543585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.543867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.543903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.544046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.544080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.544422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.544462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 
00:28:32.611 [2024-12-06 15:45:38.544622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.544663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.544821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.544857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.544975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.545009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.545216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.545252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.545460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.545496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 
00:28:32.611 [2024-12-06 15:45:38.545639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.545677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.545883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.545919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-12-06 15:45:38.546104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.611 [2024-12-06 15:45:38.546137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.546281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.546317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.546459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.546494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 
00:28:32.612 [2024-12-06 15:45:38.546708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.546743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.546889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.546924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.547119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.547160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.547421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.547457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.547674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.547707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 
00:28:32.612 [2024-12-06 15:45:38.547899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.547934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.548191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.548227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.548412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.548448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.548585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.548620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.548813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.548850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 
00:28:32.612 [2024-12-06 15:45:38.549109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.549144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.549383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.549423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.549558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.549593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.549716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.549750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.549944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.549979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 
00:28:32.612 [2024-12-06 15:45:38.550186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.550222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.550415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.550453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.550653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.550689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.550898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.550932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.551081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.551116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 
00:28:32.612 [2024-12-06 15:45:38.551313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.551348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.551472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.551507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.551788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.551824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.552022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.552057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.552249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.552285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 
00:28:32.612 [2024-12-06 15:45:38.552554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.552602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.552840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.552874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.553071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.553108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.553298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.553333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.553546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.553582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 
00:28:32.612 [2024-12-06 15:45:38.553803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.553838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.554056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.554098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.554229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.554263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.554419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.554459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 00:28:32.612 [2024-12-06 15:45:38.554592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.554626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.612 qpair failed and we were unable to recover it. 
00:28:32.612 [2024-12-06 15:45:38.554850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.612 [2024-12-06 15:45:38.554894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.613 qpair failed and we were unable to recover it. 00:28:32.613 [2024-12-06 15:45:38.555179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.613 [2024-12-06 15:45:38.555212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.613 qpair failed and we were unable to recover it. 00:28:32.613 [2024-12-06 15:45:38.555399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.613 [2024-12-06 15:45:38.555435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.613 qpair failed and we were unable to recover it. 00:28:32.613 [2024-12-06 15:45:38.555660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.613 [2024-12-06 15:45:38.555697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.613 qpair failed and we were unable to recover it. 00:28:32.613 [2024-12-06 15:45:38.555896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.613 [2024-12-06 15:45:38.555937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.613 qpair failed and we were unable to recover it. 
00:28:32.910 [2024-12-06 15:45:38.582923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.910 [2024-12-06 15:45:38.582956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.910 qpair failed and we were unable to recover it. 00:28:32.910 [2024-12-06 15:45:38.583153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.910 [2024-12-06 15:45:38.583189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.910 qpair failed and we were unable to recover it. 00:28:32.910 [2024-12-06 15:45:38.583364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.910 [2024-12-06 15:45:38.583411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.910 qpair failed and we were unable to recover it. 00:28:32.910 [2024-12-06 15:45:38.583600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.910 [2024-12-06 15:45:38.583634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.910 qpair failed and we were unable to recover it. 00:28:32.910 [2024-12-06 15:45:38.583745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.910 [2024-12-06 15:45:38.583779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.910 qpair failed and we were unable to recover it. 
00:28:32.910 [2024-12-06 15:45:38.583978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.910 [2024-12-06 15:45:38.584012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.910 qpair failed and we were unable to recover it. 00:28:32.910 [2024-12-06 15:45:38.584235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.910 [2024-12-06 15:45:38.584269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.910 qpair failed and we were unable to recover it. 00:28:32.910 [2024-12-06 15:45:38.584461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.910 [2024-12-06 15:45:38.584497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.910 qpair failed and we were unable to recover it. 00:28:32.910 [2024-12-06 15:45:38.584678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.910 [2024-12-06 15:45:38.584711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.910 qpair failed and we were unable to recover it. 00:28:32.910 [2024-12-06 15:45:38.584903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.910 [2024-12-06 15:45:38.584937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.910 qpair failed and we were unable to recover it. 
00:28:32.910 [2024-12-06 15:45:38.585056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.910 [2024-12-06 15:45:38.585091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.910 qpair failed and we were unable to recover it. 00:28:32.910 [2024-12-06 15:45:38.585206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.910 [2024-12-06 15:45:38.585239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.910 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.585386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.585421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.585538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.585575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.585709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.585742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 
00:28:32.911 [2024-12-06 15:45:38.585881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.585918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.586191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.586225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.586343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.586422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.586624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.586660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.586799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.586835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 
00:28:32.911 [2024-12-06 15:45:38.587026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.587063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.587313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.587349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.587502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.587535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.587737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.587772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.587920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.587955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 
00:28:32.911 [2024-12-06 15:45:38.588295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.588327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.588554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.588592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.588724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.588758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.588974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.589007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.589125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.589159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 
00:28:32.911 [2024-12-06 15:45:38.589301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.589336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.589545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.589580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.589712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.589747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.589879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.589914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.590185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.590220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 
00:28:32.911 [2024-12-06 15:45:38.590416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.590458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.590713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.590748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.590892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.590927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.591046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.591081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 00:28:32.911 [2024-12-06 15:45:38.591269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.911 [2024-12-06 15:45:38.591308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.911 qpair failed and we were unable to recover it. 
00:28:32.912 [2024-12-06 15:45:38.591569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.591605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.591731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.591763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.591960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.591993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.592180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.592214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.592462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.592499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 
00:28:32.912 [2024-12-06 15:45:38.592778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.592811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.592997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.593031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.593222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.593258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.593485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.593521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.593719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.593752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 
00:28:32.912 [2024-12-06 15:45:38.593890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.593924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.594039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.594073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.594195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.594230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.594511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.594547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.594763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.594796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 
00:28:32.912 [2024-12-06 15:45:38.595049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.595083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.595399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.595433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.595633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.595667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.595851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.595887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.596023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.596059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 
00:28:32.912 [2024-12-06 15:45:38.596248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.596282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.596471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.596506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.596701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.596735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.596949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.596984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.597284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.597317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 
00:28:32.912 [2024-12-06 15:45:38.597443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.597477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.912 [2024-12-06 15:45:38.597677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.912 [2024-12-06 15:45:38.597711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.912 qpair failed and we were unable to recover it. 00:28:32.913 [2024-12-06 15:45:38.597920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.913 [2024-12-06 15:45:38.597954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.913 qpair failed and we were unable to recover it. 00:28:32.913 [2024-12-06 15:45:38.598093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.913 [2024-12-06 15:45:38.598126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.913 qpair failed and we were unable to recover it. 00:28:32.913 [2024-12-06 15:45:38.598338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.913 [2024-12-06 15:45:38.598396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.913 qpair failed and we were unable to recover it. 
00:28:32.913 [2024-12-06 15:45:38.598529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.913 [2024-12-06 15:45:38.598564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.913 qpair failed and we were unable to recover it. 00:28:32.913 [2024-12-06 15:45:38.598750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.913 [2024-12-06 15:45:38.598785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.913 qpair failed and we were unable to recover it. 00:28:32.913 [2024-12-06 15:45:38.599032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.913 [2024-12-06 15:45:38.599066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.913 qpair failed and we were unable to recover it. 00:28:32.913 [2024-12-06 15:45:38.599252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.913 [2024-12-06 15:45:38.599300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.913 qpair failed and we were unable to recover it. 00:28:32.913 [2024-12-06 15:45:38.599568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.913 [2024-12-06 15:45:38.599605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.913 qpair failed and we were unable to recover it. 
00:28:32.913 [2024-12-06 15:45:38.599787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.913 [2024-12-06 15:45:38.599824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.913 qpair failed and we were unable to recover it. 00:28:32.913 [2024-12-06 15:45:38.599956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.913 [2024-12-06 15:45:38.599990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.913 qpair failed and we were unable to recover it. 00:28:32.913 [2024-12-06 15:45:38.600119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.913 [2024-12-06 15:45:38.600155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.913 qpair failed and we were unable to recover it. 00:28:32.913 [2024-12-06 15:45:38.600425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.913 [2024-12-06 15:45:38.600465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.913 qpair failed and we were unable to recover it. 00:28:32.913 [2024-12-06 15:45:38.600604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.913 [2024-12-06 15:45:38.600655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.913 qpair failed and we were unable to recover it. 
00:28:32.913 [... the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error sequence for tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 repeats ~110 more times through 00:28:32.917 (2024-12-06 15:45:38.600911 to 15:45:38.624760), each ending "qpair failed and we were unable to recover it." ...]
00:28:32.917 [2024-12-06 15:45:38.625015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.625050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.625322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.625355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.625499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.625534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.625755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.625793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.626055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.626092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 
00:28:32.917 [2024-12-06 15:45:38.626220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.626262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.626414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.626450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.626589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.626623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.626739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.626772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.626924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.626970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 
00:28:32.917 [2024-12-06 15:45:38.627200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.627232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.627415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.627451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.627561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.627595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.627811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.627847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.627987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.628037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 
00:28:32.917 [2024-12-06 15:45:38.628168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.628205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.628314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.628346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.628540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.628573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.628756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.917 [2024-12-06 15:45:38.628789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.917 qpair failed and we were unable to recover it. 00:28:32.917 [2024-12-06 15:45:38.628985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.629018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 
00:28:32.918 [2024-12-06 15:45:38.629202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.629236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.629430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.629470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.629604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.629638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.629786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.629823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.630107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.630141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 
00:28:32.918 [2024-12-06 15:45:38.630326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.630358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.630499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.630533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.630729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.630763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.630894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.630933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.631159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.631195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 
00:28:32.918 [2024-12-06 15:45:38.631391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.631434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.631559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.631591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.631787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.631821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.632065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.632098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.632231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.632263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 
00:28:32.918 [2024-12-06 15:45:38.632480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.632520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.632732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.632768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.632895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.632928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.633106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.633141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.633391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.633426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 
00:28:32.918 [2024-12-06 15:45:38.633605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.633638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.633764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.633799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.634004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.634039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.634179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.634223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.634422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.634458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 
00:28:32.918 [2024-12-06 15:45:38.634666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.634699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.918 qpair failed and we were unable to recover it. 00:28:32.918 [2024-12-06 15:45:38.634825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.918 [2024-12-06 15:45:38.634861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.635051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.635085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.635277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.635311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.635439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.635481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 
00:28:32.919 [2024-12-06 15:45:38.635669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.635701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.635824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.635858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.636034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.636069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.636238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.636270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.636518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.636552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 
00:28:32.919 [2024-12-06 15:45:38.636693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.636726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.636912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.636946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.637142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.637176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.637360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.637407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.637589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.637623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 
00:28:32.919 [2024-12-06 15:45:38.637799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.637833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.638049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.638100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.638239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.638273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.638465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.638500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.638698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.638731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 
00:28:32.919 [2024-12-06 15:45:38.638911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.638945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.639120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.639152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.639343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.639406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.919 qpair failed and we were unable to recover it. 00:28:32.919 [2024-12-06 15:45:38.639595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.919 [2024-12-06 15:45:38.639627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.920 qpair failed and we were unable to recover it. 00:28:32.920 [2024-12-06 15:45:38.639867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.920 [2024-12-06 15:45:38.639899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.920 qpair failed and we were unable to recover it. 
00:28:32.920 [2024-12-06 15:45:38.640101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.920 [2024-12-06 15:45:38.640133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.920 qpair failed and we were unable to recover it. 00:28:32.920 [2024-12-06 15:45:38.640400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.920 [2024-12-06 15:45:38.640434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.920 qpair failed and we were unable to recover it. 00:28:32.920 [2024-12-06 15:45:38.640562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.920 [2024-12-06 15:45:38.640595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.920 qpair failed and we were unable to recover it. 00:28:32.920 [2024-12-06 15:45:38.640786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.920 [2024-12-06 15:45:38.640818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.920 qpair failed and we were unable to recover it. 00:28:32.920 [2024-12-06 15:45:38.640997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.920 [2024-12-06 15:45:38.641029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.920 qpair failed and we were unable to recover it. 
00:28:32.920 [2024-12-06 15:45:38.641159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.920 [2024-12-06 15:45:38.641191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.920 qpair failed and we were unable to recover it. 00:28:32.920 [2024-12-06 15:45:38.641391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.920 [2024-12-06 15:45:38.641426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.920 qpair failed and we were unable to recover it. 00:28:32.920 [2024-12-06 15:45:38.641618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.920 [2024-12-06 15:45:38.641650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.920 qpair failed and we were unable to recover it. 00:28:32.920 [2024-12-06 15:45:38.641780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.920 [2024-12-06 15:45:38.641814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.920 qpair failed and we were unable to recover it. 00:28:32.920 [2024-12-06 15:45:38.642002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.920 [2024-12-06 15:45:38.642037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.920 qpair failed and we were unable to recover it. 
00:28:32.924 [2024-12-06 15:45:38.665339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.665381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.665579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.665613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.665908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.665941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.666078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.666110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.666318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.666351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 
00:28:32.924 [2024-12-06 15:45:38.666534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.666567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.666805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.666879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.667105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.667141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.667278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.667312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.667543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.667579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 
00:28:32.924 [2024-12-06 15:45:38.667759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.667793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.667972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.668006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.668187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.668221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.668418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.668452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.668651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.668684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 
00:28:32.924 [2024-12-06 15:45:38.668865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.668899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.669137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.669170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.669358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.669401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.669597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.669631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.669757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.669799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 
00:28:32.924 [2024-12-06 15:45:38.669989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.670025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.670221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.670254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.670384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.670418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.670617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.670649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.924 qpair failed and we were unable to recover it. 00:28:32.924 [2024-12-06 15:45:38.670789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.924 [2024-12-06 15:45:38.670822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 
00:28:32.925 [2024-12-06 15:45:38.670993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.671025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.671144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.671177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.671286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.671318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.671457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.671493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.671673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.671706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 
00:28:32.925 [2024-12-06 15:45:38.671813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.671846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.672023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.672056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.672243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.672276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.672471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.672507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.672772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.672805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 
00:28:32.925 [2024-12-06 15:45:38.673048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.673083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.673203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.673236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.673411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.673446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.673560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.673595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.673783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.673815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 
00:28:32.925 [2024-12-06 15:45:38.673944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.673978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.674150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.674184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.674452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.674485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.674671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.674706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.674914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.674948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 
00:28:32.925 [2024-12-06 15:45:38.675067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.675100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.675352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.675418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.675607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.675641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.675759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.675792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.925 qpair failed and we were unable to recover it. 00:28:32.925 [2024-12-06 15:45:38.676046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.925 [2024-12-06 15:45:38.676080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 
00:28:32.926 [2024-12-06 15:45:38.676213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.676247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.676511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.676547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.676835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.676870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.677039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.677086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.677236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.677271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 
00:28:32.926 [2024-12-06 15:45:38.677400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.677434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.677559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.677592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.677806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.677847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.678040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.678073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.678347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.678398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 
00:28:32.926 [2024-12-06 15:45:38.678579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.678615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.678821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2187b20 is same with the state(6) to be set 00:28:32.926 [2024-12-06 15:45:38.678984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.679021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.679292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.679325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.679577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.679613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.679734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.679767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 
00:28:32.926 [2024-12-06 15:45:38.679891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.679924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.680123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.680156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.680338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.680384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.680554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.680588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.680771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.680804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 
00:28:32.926 [2024-12-06 15:45:38.680952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.680984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.681223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.681255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.681390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.681430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.681617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.681649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 00:28:32.926 [2024-12-06 15:45:38.681832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.926 [2024-12-06 15:45:38.681865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:32.926 qpair failed and we were unable to recover it. 
00:28:32.926 [2024-12-06 15:45:38.682110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.926 [2024-12-06 15:45:38.682143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:32.926 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / sock connection error / qpair failed messages for tqpair=0x7f6e8c000b90 repeated through 2024-12-06 15:45:38.696053]
00:28:32.929 [2024-12-06 15:45:38.696284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.929 [2024-12-06 15:45:38.696356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:32.929 qpair failed and we were unable to recover it.
[identical messages for tqpair=0x7f6e80000b90 repeated through 2024-12-06 15:45:38.705140]
00:28:32.930 [2024-12-06 15:45:38.705365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.930 [2024-12-06 15:45:38.705446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:32.930 qpair failed and we were unable to recover it.
[identical messages for tqpair=0x2179be0 repeated through 2024-12-06 15:45:38.708043]
00:28:32.930 [2024-12-06 15:45:38.708148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.930 [2024-12-06 15:45:38.708182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.930 qpair failed and we were unable to recover it. 00:28:32.930 [2024-12-06 15:45:38.708302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.930 [2024-12-06 15:45:38.708339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.930 qpair failed and we were unable to recover it. 00:28:32.930 [2024-12-06 15:45:38.708592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.930 [2024-12-06 15:45:38.708627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.708760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.708794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.708984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.709018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 
00:28:32.931 [2024-12-06 15:45:38.709136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.709168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.709303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.709338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.709643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.709683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.709816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.709851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.710104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.710140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 
00:28:32.931 [2024-12-06 15:45:38.710324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.710360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.710558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.710592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.710713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.710748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.710920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.710954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.711132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.711166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 
00:28:32.931 [2024-12-06 15:45:38.711364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.711409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.711610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.711647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.711782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.711817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.711995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.712029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.712164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.712197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 
00:28:32.931 [2024-12-06 15:45:38.712381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.712437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.712634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.712667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.712946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.712979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.713237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.713270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.713495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.713529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 
00:28:32.931 [2024-12-06 15:45:38.713656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.713691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.931 qpair failed and we were unable to recover it. 00:28:32.931 [2024-12-06 15:45:38.713816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.931 [2024-12-06 15:45:38.713851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.714030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.714063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.714272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.714314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.714502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.714549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 
00:28:32.932 [2024-12-06 15:45:38.714726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.714759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.714939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.714972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.715175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.715208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.715388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.715423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.715596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.715630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 
00:28:32.932 [2024-12-06 15:45:38.715746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.715779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.716034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.716067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.716194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.716228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.716338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.716381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.716559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.716593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 
00:28:32.932 [2024-12-06 15:45:38.716785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.716819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.717005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.717038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.717246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.717281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.717478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.717519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.717650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.717683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 
00:28:32.932 [2024-12-06 15:45:38.717865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.717899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.718040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.718073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.718270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.718303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.718492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.718527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.718714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.718747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 
00:28:32.932 [2024-12-06 15:45:38.718868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.718900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.719004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.719037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.719207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.719240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.719410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.719445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.932 [2024-12-06 15:45:38.719631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.719666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 
00:28:32.932 [2024-12-06 15:45:38.719852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.932 [2024-12-06 15:45:38.719886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.932 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.720084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.720119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.720224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.720259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.720537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.720571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.720776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.720811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 
00:28:32.933 [2024-12-06 15:45:38.720932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.720966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.721083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.721118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.721257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.721290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.721477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.721513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.721761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.721794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 
00:28:32.933 [2024-12-06 15:45:38.722033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.722065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.722258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.722292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.722529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.722565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.722801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.722834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.723031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.723072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 
00:28:32.933 [2024-12-06 15:45:38.723198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.723232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.723359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.723401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.723598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.723632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.723879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.723915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 00:28:32.933 [2024-12-06 15:45:38.724107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.724140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 
00:28:32.933 [2024-12-06 15:45:38.724321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.933 [2024-12-06 15:45:38.724353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:32.933 qpair failed and we were unable to recover it. 
[the three messages above repeat 25 more times for tqpair=0x7f6e80000b90, timestamps 15:45:38.724602 through 15:45:38.729863]
00:28:32.934 [2024-12-06 15:45:38.730117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.934 [2024-12-06 15:45:38.730203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.934 qpair failed and we were unable to recover it. 
[the three messages above repeat 88 more times for tqpair=0x7f6e84000b90, timestamps 15:45:38.730518 through 15:45:38.748422]
00:28:32.937 [2024-12-06 15:45:38.748616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.937 [2024-12-06 15:45:38.748649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.937 qpair failed and we were unable to recover it. 00:28:32.937 [2024-12-06 15:45:38.748819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.937 [2024-12-06 15:45:38.748853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.937 qpair failed and we were unable to recover it. 00:28:32.937 [2024-12-06 15:45:38.748967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.937 [2024-12-06 15:45:38.748999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.937 qpair failed and we were unable to recover it. 00:28:32.937 [2024-12-06 15:45:38.749131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.937 [2024-12-06 15:45:38.749166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.937 qpair failed and we were unable to recover it. 00:28:32.937 [2024-12-06 15:45:38.749406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.937 [2024-12-06 15:45:38.749441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.937 qpair failed and we were unable to recover it. 
00:28:32.937 [2024-12-06 15:45:38.749615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.937 [2024-12-06 15:45:38.749648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.937 qpair failed and we were unable to recover it. 00:28:32.937 [2024-12-06 15:45:38.749851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.749885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.750128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.750162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.750284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.750318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.750498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.750536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 
00:28:32.938 [2024-12-06 15:45:38.750642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.750676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.750959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.750992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.751258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.751292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.751421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.751458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.751586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.751620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 
00:28:32.938 [2024-12-06 15:45:38.751742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.751775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.752039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.752072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.752214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.752248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.752444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.752477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.752653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.752687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 
00:28:32.938 [2024-12-06 15:45:38.752861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.752893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.753087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.753121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.753304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.753338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.753485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.753519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.753651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.753683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 
00:28:32.938 [2024-12-06 15:45:38.753856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.753892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.754009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.754048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.754261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.754295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.754421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.754455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.754718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.754753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 
00:28:32.938 [2024-12-06 15:45:38.754948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.754982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.755166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.755200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.755442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.755477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.755596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.755629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.938 [2024-12-06 15:45:38.755733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.755767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 
00:28:32.938 [2024-12-06 15:45:38.755973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.938 [2024-12-06 15:45:38.756007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.938 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.756181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.756215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.756399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.756434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.756641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.756675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.756781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.756814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 
00:28:32.939 [2024-12-06 15:45:38.757017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.757051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.757290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.757324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.757459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.757492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.757666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.757700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.757959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.757992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 
00:28:32.939 [2024-12-06 15:45:38.758105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.758138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.758321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.758354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.758554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.758587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.758702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.758735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.758863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.758897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 
00:28:32.939 [2024-12-06 15:45:38.759024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.759056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.759237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.759271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.759533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.759567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.759679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.759711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.759895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.759929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 
00:28:32.939 [2024-12-06 15:45:38.760168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.760201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.760390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.760424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.760538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.760570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.760815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.760847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.760980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.761013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 
00:28:32.939 [2024-12-06 15:45:38.761250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.761283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.761394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.761427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.761542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.761574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.761757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.761791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.762002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.762034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 
00:28:32.939 [2024-12-06 15:45:38.762314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.939 [2024-12-06 15:45:38.762347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.939 qpair failed and we were unable to recover it. 00:28:32.939 [2024-12-06 15:45:38.762554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.940 [2024-12-06 15:45:38.762593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.940 qpair failed and we were unable to recover it. 00:28:32.940 [2024-12-06 15:45:38.762783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.940 [2024-12-06 15:45:38.762815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.940 qpair failed and we were unable to recover it. 00:28:32.940 [2024-12-06 15:45:38.763023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.940 [2024-12-06 15:45:38.763057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.940 qpair failed and we were unable to recover it. 00:28:32.940 [2024-12-06 15:45:38.763174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.940 [2024-12-06 15:45:38.763207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.940 qpair failed and we were unable to recover it. 
00:28:32.940 [2024-12-06 15:45:38.763393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.940 [2024-12-06 15:45:38.763428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.940 qpair failed and we were unable to recover it. 00:28:32.940 [2024-12-06 15:45:38.763608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.940 [2024-12-06 15:45:38.763641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.940 qpair failed and we were unable to recover it. 00:28:32.940 [2024-12-06 15:45:38.763813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.940 [2024-12-06 15:45:38.763845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.940 qpair failed and we were unable to recover it. 00:28:32.940 [2024-12-06 15:45:38.764088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.940 [2024-12-06 15:45:38.764121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.940 qpair failed and we were unable to recover it. 00:28:32.940 [2024-12-06 15:45:38.764237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.940 [2024-12-06 15:45:38.764270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.940 qpair failed and we were unable to recover it. 
00:28:32.940 [2024-12-06 15:45:38.764546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.940 [2024-12-06 15:45:38.764580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:32.940 qpair failed and we were unable to recover it.
00:28:32.944 [2024-12-06 15:45:38.789735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.789768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.944 qpair failed and we were unable to recover it. 00:28:32.944 [2024-12-06 15:45:38.789883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.789917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.944 qpair failed and we were unable to recover it. 00:28:32.944 [2024-12-06 15:45:38.790179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.790213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.944 qpair failed and we were unable to recover it. 00:28:32.944 [2024-12-06 15:45:38.790454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.790490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.944 qpair failed and we were unable to recover it. 00:28:32.944 [2024-12-06 15:45:38.790676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.790709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.944 qpair failed and we were unable to recover it. 
00:28:32.944 [2024-12-06 15:45:38.790819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.790850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.944 qpair failed and we were unable to recover it. 00:28:32.944 [2024-12-06 15:45:38.790977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.791010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.944 qpair failed and we were unable to recover it. 00:28:32.944 [2024-12-06 15:45:38.791132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.791165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.944 qpair failed and we were unable to recover it. 00:28:32.944 [2024-12-06 15:45:38.791430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.791465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.944 qpair failed and we were unable to recover it. 00:28:32.944 [2024-12-06 15:45:38.791591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.791625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.944 qpair failed and we were unable to recover it. 
00:28:32.944 [2024-12-06 15:45:38.791752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.791785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.944 qpair failed and we were unable to recover it. 00:28:32.944 [2024-12-06 15:45:38.791960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.791994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.944 qpair failed and we were unable to recover it. 00:28:32.944 [2024-12-06 15:45:38.792256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.792289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.944 qpair failed and we were unable to recover it. 00:28:32.944 [2024-12-06 15:45:38.792475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.792510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.944 qpair failed and we were unable to recover it. 00:28:32.944 [2024-12-06 15:45:38.792615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.944 [2024-12-06 15:45:38.792648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 
00:28:32.945 [2024-12-06 15:45:38.792830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.792862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.793137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.793171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.793312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.793345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.793550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.793586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.793835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.793867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 
00:28:32.945 [2024-12-06 15:45:38.794056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.794089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.794263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.794295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.794423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.794458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.794656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.794690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.794809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.794841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 
00:28:32.945 [2024-12-06 15:45:38.795104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.795137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.795335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.795379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.795562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.795597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.795704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.795737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.795864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.795897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 
00:28:32.945 [2024-12-06 15:45:38.796016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.796050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.796302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.796334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.796598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.796634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.796832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.796865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.797129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.797162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 
00:28:32.945 [2024-12-06 15:45:38.797333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.797374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.797564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.797603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.797742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.797776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.797904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.797937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.798127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.798160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 
00:28:32.945 [2024-12-06 15:45:38.798272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.798306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.798583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.798619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.798794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.798828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.945 qpair failed and we were unable to recover it. 00:28:32.945 [2024-12-06 15:45:38.799004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.945 [2024-12-06 15:45:38.799039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.799249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.799283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 
00:28:32.946 [2024-12-06 15:45:38.799476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.799511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.799642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.799674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.799880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.799915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.800107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.800140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.800332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.800375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 
00:28:32.946 [2024-12-06 15:45:38.800573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.800608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.800876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.800910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.801180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.801214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.801386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.801420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.801664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.801699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 
00:28:32.946 [2024-12-06 15:45:38.801834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.801868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.802116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.802151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.802374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.802409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.802548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.802581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.802703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.802736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 
00:28:32.946 [2024-12-06 15:45:38.802949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.802984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.803177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.803210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.803345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.803398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.803520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.803554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.803822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.803857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 
00:28:32.946 [2024-12-06 15:45:38.804031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.804066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.804261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.804295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.804402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.804438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.804628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.804662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.804945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.804978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 
00:28:32.946 [2024-12-06 15:45:38.805243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.805276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.805504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.946 [2024-12-06 15:45:38.805539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.946 qpair failed and we were unable to recover it. 00:28:32.946 [2024-12-06 15:45:38.805668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.947 [2024-12-06 15:45:38.805701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.947 qpair failed and we were unable to recover it. 00:28:32.947 [2024-12-06 15:45:38.805819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.947 [2024-12-06 15:45:38.805853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.947 qpair failed and we were unable to recover it. 00:28:32.947 [2024-12-06 15:45:38.805965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.947 [2024-12-06 15:45:38.805999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.947 qpair failed and we were unable to recover it. 
00:28:32.947 [2024-12-06 15:45:38.806107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.947 [2024-12-06 15:45:38.806141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.947 qpair failed and we were unable to recover it. 
00:28:32.947 [... identical connect() failed (errno = 111) / sock connection error messages for tqpair=0x7f6e84000b90, addr=10.0.0.2, port=4420 repeated from 15:45:38.806107 through 15:45:38.826652 ...] 
00:28:32.949 [2024-12-06 15:45:38.820715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.949 [2024-12-06 15:45:38.820791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.949 qpair failed and we were unable to recover it. 
00:28:32.950 [... identical connect() failed (errno = 111) / sock connection error messages for tqpair=0x2179be0, addr=10.0.0.2, port=4420 repeated from 15:45:38.820715 through 15:45:38.831062 ...] 
00:28:32.950 [2024-12-06 15:45:38.831254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.950 [2024-12-06 15:45:38.831289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.950 qpair failed and we were unable to recover it. 00:28:32.950 [2024-12-06 15:45:38.831414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.950 [2024-12-06 15:45:38.831450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.950 qpair failed and we were unable to recover it. 00:28:32.950 [2024-12-06 15:45:38.831634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.950 [2024-12-06 15:45:38.831668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.950 qpair failed and we were unable to recover it. 00:28:32.950 [2024-12-06 15:45:38.831841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.950 [2024-12-06 15:45:38.831876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.950 qpair failed and we were unable to recover it. 00:28:32.950 [2024-12-06 15:45:38.832127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.950 [2024-12-06 15:45:38.832160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.950 qpair failed and we were unable to recover it. 
00:28:32.950 [2024-12-06 15:45:38.832353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.950 [2024-12-06 15:45:38.832399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.950 qpair failed and we were unable to recover it. 00:28:32.950 [2024-12-06 15:45:38.832645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.950 [2024-12-06 15:45:38.832679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.950 qpair failed and we were unable to recover it. 00:28:32.950 [2024-12-06 15:45:38.832804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.950 [2024-12-06 15:45:38.832843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.950 qpair failed and we were unable to recover it. 00:28:32.950 [2024-12-06 15:45:38.832965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.950 [2024-12-06 15:45:38.833000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.950 qpair failed and we were unable to recover it. 00:28:32.950 [2024-12-06 15:45:38.833196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.950 [2024-12-06 15:45:38.833230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:32.950 qpair failed and we were unable to recover it. 
00:28:32.950 [2024-12-06 15:45:38.833358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.950 [2024-12-06 15:45:38.833404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:32.950 qpair failed and we were unable to recover it.
00:28:32.952 [2024-12-06 15:45:38.850724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.850757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.851017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.851051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.851156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.851190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.851436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.851470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.851760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.851795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 
00:28:32.952 [2024-12-06 15:45:38.852057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.852091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.852235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.852268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.852522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.852556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.852794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.852827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.852944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.852979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 
00:28:32.952 [2024-12-06 15:45:38.853087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.853120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.853251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.853283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.853393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.853427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.853673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.853706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.853914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.853947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 
00:28:32.952 [2024-12-06 15:45:38.854123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.854157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.854281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.854315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.854438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.952 [2024-12-06 15:45:38.854473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.952 qpair failed and we were unable to recover it. 00:28:32.952 [2024-12-06 15:45:38.854664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.854698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.854834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.854869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 
00:28:32.953 [2024-12-06 15:45:38.854992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.855024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.855225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.855258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.855449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.855485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.855614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.855647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.855901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.855934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 
00:28:32.953 [2024-12-06 15:45:38.856138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.856171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.856411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.856445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.856577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.856610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.856820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.856854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.856984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.857018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 
00:28:32.953 [2024-12-06 15:45:38.857194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.857228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.857430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.857465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.857682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.857721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.857940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.857975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.858099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.858134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 
00:28:32.953 [2024-12-06 15:45:38.858248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.858282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.858470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.858504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.858747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.858779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.858899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.858931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.859042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.859076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 
00:28:32.953 [2024-12-06 15:45:38.859245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.859278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.859403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.859438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.859577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.859609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.859814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.859847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.860034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.860068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 
00:28:32.953 [2024-12-06 15:45:38.860252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.860285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.860494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.860528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.860718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.860753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.860943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.860978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.861154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.861187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 
00:28:32.953 [2024-12-06 15:45:38.861319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.861351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.861628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.861663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.861843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.861876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.862056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.862090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.862217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.862251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 
00:28:32.953 [2024-12-06 15:45:38.862444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.862478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.862742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.862774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.862953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.862986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.863228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.863260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 00:28:32.953 [2024-12-06 15:45:38.863416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.863451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.953 qpair failed and we were unable to recover it. 
00:28:32.953 [2024-12-06 15:45:38.863715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.953 [2024-12-06 15:45:38.863748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.863939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.863972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.864096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.864129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.864238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.864269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.864462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.864496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 
00:28:32.954 [2024-12-06 15:45:38.864681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.864714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.864890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.864924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.865187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.865220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.865410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.865445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.865648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.865681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 
00:28:32.954 [2024-12-06 15:45:38.865811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.865846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.866026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.866059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.866238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.866277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.866466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.866500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.866696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.866728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 
00:28:32.954 [2024-12-06 15:45:38.866967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.867000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.867124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.867157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.867358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.867401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.867536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.867568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.867700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.867734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 
00:28:32.954 [2024-12-06 15:45:38.867943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.867975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.868106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.868139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.868265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.868298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.868488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.868524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.868704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.868736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 
00:28:32.954 [2024-12-06 15:45:38.868932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.868966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.869088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.869122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.869295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.869329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:32.954 qpair failed and we were unable to recover it. 00:28:32.954 [2024-12-06 15:45:38.869514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.954 [2024-12-06 15:45:38.869548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.869760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.869794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 
00:28:33.273 [2024-12-06 15:45:38.869923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.869957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.870220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.870253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.870497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.870532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.870647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.870680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.870866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.870899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 
00:28:33.273 [2024-12-06 15:45:38.871125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.871158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.871413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.871448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.871691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.871725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.871841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.871875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.872008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.872042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 
00:28:33.273 [2024-12-06 15:45:38.872155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.872188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.872410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.872481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.872751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.872787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.873015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.873049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.873190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.873222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 
00:28:33.273 [2024-12-06 15:45:38.873386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.873422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.873533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.873565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.873694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.873728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.873863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.873895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.874078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.874110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 
00:28:33.273 [2024-12-06 15:45:38.874390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.874426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.874738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.874769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.273 qpair failed and we were unable to recover it. 00:28:33.273 [2024-12-06 15:45:38.874974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.273 [2024-12-06 15:45:38.875016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.875188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.875222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.875359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.875404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 
00:28:33.274 [2024-12-06 15:45:38.875644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.875677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.875802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.875834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.876075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.876108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.876351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.876395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.876614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.876647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 
00:28:33.274 [2024-12-06 15:45:38.876903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.876935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.877176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.877208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.877399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.877434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.877617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.877650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.877833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.877866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 
00:28:33.274 [2024-12-06 15:45:38.878042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.878075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.878339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.878384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.878494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.878527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.878790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.878822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.879076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.879109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 
00:28:33.274 [2024-12-06 15:45:38.879233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.879265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.879453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.879486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.879672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.879704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.879971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.880003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.880124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.880156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 
00:28:33.274 [2024-12-06 15:45:38.880417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.880450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.880633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.880667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.880915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.880947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.881064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.881097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.881438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.881510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 
00:28:33.274 [2024-12-06 15:45:38.881710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.881750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.881998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.882035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.882289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.882324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.882608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.882644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.882823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.882856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 
00:28:33.274 [2024-12-06 15:45:38.883028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.883062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.883198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.883233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.883388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.883427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.883716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.274 [2024-12-06 15:45:38.883753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.274 qpair failed and we were unable to recover it. 00:28:33.274 [2024-12-06 15:45:38.883942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.883976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 
00:28:33.275 [2024-12-06 15:45:38.884161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.884200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.884331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.884382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.884655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.884706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.884988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.885021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.885210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.885243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 
00:28:33.275 [2024-12-06 15:45:38.885429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.885465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.885662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.885700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.885944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.885985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.886240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.886275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.886413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.886448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 
00:28:33.275 [2024-12-06 15:45:38.886640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.886675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.886804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.886839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.887109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.887143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.887392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.887430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.887578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.887615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 
00:28:33.275 [2024-12-06 15:45:38.887746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.887785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.887989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.888030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.888222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.888256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.888381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.888421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 00:28:33.275 [2024-12-06 15:45:38.888603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.888638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it. 
00:28:33.275 [2024-12-06 15:45:38.888939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.275 [2024-12-06 15:45:38.888976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.275 qpair failed and we were unable to recover it.
[The same connect() failure (errno = 111, ECONNREFUSED) on tqpair=0x7f6e80000b90 against addr=10.0.0.2, port=4420 repeats continuously from 15:45:38.888 through 15:45:38.914; repeated log lines elided.]
00:28:33.278 [2024-12-06 15:45:38.914647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.278 [2024-12-06 15:45:38.914679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.278 qpair failed and we were unable to recover it. 00:28:33.278 [2024-12-06 15:45:38.914822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.278 [2024-12-06 15:45:38.914855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.278 qpair failed and we were unable to recover it. 00:28:33.278 [2024-12-06 15:45:38.915103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.278 [2024-12-06 15:45:38.915136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.278 qpair failed and we were unable to recover it. 00:28:33.278 [2024-12-06 15:45:38.915331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.278 [2024-12-06 15:45:38.915364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.278 qpair failed and we were unable to recover it. 00:28:33.278 [2024-12-06 15:45:38.915583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.278 [2024-12-06 15:45:38.915617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.278 qpair failed and we were unable to recover it. 
00:28:33.278 [2024-12-06 15:45:38.915808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.278 [2024-12-06 15:45:38.915840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.278 qpair failed and we were unable to recover it. 00:28:33.278 [2024-12-06 15:45:38.916014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.278 [2024-12-06 15:45:38.916047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.278 qpair failed and we were unable to recover it. 00:28:33.278 [2024-12-06 15:45:38.916182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.278 [2024-12-06 15:45:38.916215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.278 qpair failed and we were unable to recover it. 00:28:33.278 [2024-12-06 15:45:38.916402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.278 [2024-12-06 15:45:38.916437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.278 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.916631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.916665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 
00:28:33.279 [2024-12-06 15:45:38.916795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.916828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.917013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.917046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.917176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.917210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.917403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.917437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.917614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.917652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 
00:28:33.279 [2024-12-06 15:45:38.917774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.917808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.917981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.918014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.918133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.918166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.918410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.918444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.918566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.918598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 
00:28:33.279 [2024-12-06 15:45:38.918791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.918824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.919004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.919037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.919226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.919260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.919442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.919476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.919667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.919700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 
00:28:33.279 [2024-12-06 15:45:38.919937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.919970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.920170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.920203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.920493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.920542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.920661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.920694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.920816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.920850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 
00:28:33.279 [2024-12-06 15:45:38.920969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.921001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.921213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.921246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.921486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.921519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.921757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.921790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.921919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.921953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 
00:28:33.279 [2024-12-06 15:45:38.922087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.922119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.922237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.922270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.922509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.922543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.922667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.922701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.922968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.923001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 
00:28:33.279 [2024-12-06 15:45:38.923133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.923167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.923277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.923311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.923516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.923550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.923672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.923704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.279 [2024-12-06 15:45:38.923893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.923926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 
00:28:33.279 [2024-12-06 15:45:38.924033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.279 [2024-12-06 15:45:38.924066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.279 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.924304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.924337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.924544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.924579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.924792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.924824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.925039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.925072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 
00:28:33.280 [2024-12-06 15:45:38.925253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.925286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.925469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.925504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.925631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.925663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.925767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.925801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.925996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.926034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 
00:28:33.280 [2024-12-06 15:45:38.926232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.926268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.926534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.926569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.926760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.926794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.926934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.926967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.927097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.927131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 
00:28:33.280 [2024-12-06 15:45:38.927377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.927411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.927588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.927623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.927812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.927845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.928086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.928120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.928360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.928415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 
00:28:33.280 [2024-12-06 15:45:38.928680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.928713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.928903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.928937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.929120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.929153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.929281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.929315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.929499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.929534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 
00:28:33.280 [2024-12-06 15:45:38.929724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.929757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.929934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.929968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.930248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.930282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.930415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.930451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.930665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.930699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 
00:28:33.280 [2024-12-06 15:45:38.930821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.930854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.930995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.931029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.931220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.931253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.931428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.931462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.931644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.931677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 
00:28:33.280 [2024-12-06 15:45:38.931796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.931829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.932020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.932053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.932239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.280 [2024-12-06 15:45:38.932273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.280 qpair failed and we were unable to recover it. 00:28:33.280 [2024-12-06 15:45:38.932528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.932564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.932758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.932792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 
00:28:33.281 [2024-12-06 15:45:38.932966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.932999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.933195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.933229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.933348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.933390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.933595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.933629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.933773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.933807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 
00:28:33.281 [2024-12-06 15:45:38.933983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.934016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.934224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.934257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.934435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.934469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.934724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.934758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.934976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.935015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 
00:28:33.281 [2024-12-06 15:45:38.935205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.935239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.935426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.935461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.935650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.935684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.935892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.935927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.936054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.936088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 
00:28:33.281 [2024-12-06 15:45:38.936262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.936296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.936521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.936556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.936742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.936775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.936943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.936976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.937157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.937191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 
00:28:33.281 [2024-12-06 15:45:38.937429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.937465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.937654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.937687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.937891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.937924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.938038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.938072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.938246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.938280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 
00:28:33.281 [2024-12-06 15:45:38.938469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.938505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.938700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.938734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.938981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.939014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.939198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.939232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.939375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.939409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 
00:28:33.281 [2024-12-06 15:45:38.939655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.939690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.939864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.939898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.940028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.940062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.940182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.940216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.281 [2024-12-06 15:45:38.940470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.940505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 
00:28:33.281 [2024-12-06 15:45:38.940697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.281 [2024-12-06 15:45:38.940730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.281 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.940857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.940891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.941003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.941037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.941279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.941312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.941609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.941644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 
00:28:33.282 [2024-12-06 15:45:38.941771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.941805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.941998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.942032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.942316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.942349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.942603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.942637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.942824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.942857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 
00:28:33.282 [2024-12-06 15:45:38.943055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.943089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.943332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.943366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.943666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.943700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.943898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.943932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.944193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.944232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 
00:28:33.282 [2024-12-06 15:45:38.944490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.944526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.944702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.944736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.944998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.945032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.945163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.945196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.945327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.945360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 
00:28:33.282 [2024-12-06 15:45:38.945633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.945667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.945838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.945871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.946068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.946101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.946229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.946262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.946501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.946536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 
00:28:33.282 [2024-12-06 15:45:38.946650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.946684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.946945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.946979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.947160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.947193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.947385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.947421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.947623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.947657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 
00:28:33.282 [2024-12-06 15:45:38.947921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.947954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.948139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.948173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.948377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.948413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.948540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.282 [2024-12-06 15:45:38.948574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.282 qpair failed and we were unable to recover it. 00:28:33.282 [2024-12-06 15:45:38.948762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.948796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 
00:28:33.283 [2024-12-06 15:45:38.948977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.949011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.949145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.949177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.949354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.949401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.949539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.949573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.949689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.949724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 
00:28:33.283 [2024-12-06 15:45:38.949909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.949943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.950085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.950120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.950357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.950411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.950663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.950697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.950904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.950938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 
00:28:33.283 [2024-12-06 15:45:38.951076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.951109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.951243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.951276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.951474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.951511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.951687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.951720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.951925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.951958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 
00:28:33.283 [2024-12-06 15:45:38.952154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.952191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.952332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.952364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.952567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.952604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.952836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.952910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 00:28:33.283 [2024-12-06 15:45:38.953121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.283 [2024-12-06 15:45:38.953169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.283 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / qpair failed messages repeated for tqpair=0x7f6e84000b90 ...]
00:28:33.285 [2024-12-06 15:45:38.967823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.285 [2024-12-06 15:45:38.967898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.285 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair failed messages repeated for tqpair=0x7f6e8c000b90 ...]
00:28:33.286 [2024-12-06 15:45:38.977345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.977388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.977568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.977602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.977716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.977749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.977928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.977962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.978131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.978163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 
00:28:33.286 [2024-12-06 15:45:38.978292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.978325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.978453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.978487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.978674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.978707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.978969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.979002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.979178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.979216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 
00:28:33.286 [2024-12-06 15:45:38.979407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.979441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.979697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.979729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.979923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.979956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.980080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.980113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.980295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.980327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 
00:28:33.286 [2024-12-06 15:45:38.980450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.980484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.980656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.980689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.980823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.980856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.981026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.286 [2024-12-06 15:45:38.981059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.286 qpair failed and we were unable to recover it. 00:28:33.286 [2024-12-06 15:45:38.981245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.981278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 
00:28:33.287 [2024-12-06 15:45:38.981486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.981520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.981627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.981659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.981772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.981803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.981934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.981967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.982146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.982178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 
00:28:33.287 [2024-12-06 15:45:38.982415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.982449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.982637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.982669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.982798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.982831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.983007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.983039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.983230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.983262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 
00:28:33.287 [2024-12-06 15:45:38.983446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.983480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.983662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.983694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.983826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.983859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.984056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.984089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.984207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.984239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 
00:28:33.287 [2024-12-06 15:45:38.984426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.984460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.984647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.984720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.984875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.984913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.985083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.985116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.985285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.985319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 
00:28:33.287 [2024-12-06 15:45:38.985451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.985485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.985728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.985761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.986025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.986057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.986263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.986296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.986414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.986448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 
00:28:33.287 [2024-12-06 15:45:38.986661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.986693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.986824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.986856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.986973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.987005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.987190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.987222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.987432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.987467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 
00:28:33.287 [2024-12-06 15:45:38.987685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.987718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.987839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.987872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.988042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.287 [2024-12-06 15:45:38.988075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.287 qpair failed and we were unable to recover it. 00:28:33.287 [2024-12-06 15:45:38.988282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.988314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.988493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.988527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 
00:28:33.288 [2024-12-06 15:45:38.988707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.988740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.988862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.988894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.989131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.989164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.989355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.989397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.989505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.989538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 
00:28:33.288 [2024-12-06 15:45:38.989668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.989701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.989878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.989912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.990128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.990161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.990402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.990442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.990553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.990587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 
00:28:33.288 [2024-12-06 15:45:38.990848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.990881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.991016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.991049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.991250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.991284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.991408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.991443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.991681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.991713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 
00:28:33.288 [2024-12-06 15:45:38.991974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.992008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.992180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.992214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.992488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.992521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.992777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.992810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.992978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.993011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 
00:28:33.288 [2024-12-06 15:45:38.993225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.993258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.993435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.993469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.993610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.993643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.993895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.993928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.994118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.994151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 
00:28:33.288 [2024-12-06 15:45:38.994413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.994447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.994631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.994665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.994791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.994824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.995026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.995060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.995326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.995359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 
00:28:33.288 [2024-12-06 15:45:38.995645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.995678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.995848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.995880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.996015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.996047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.996242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.996275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.996408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.996443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 
00:28:33.288 [2024-12-06 15:45:38.996682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.288 [2024-12-06 15:45:38.996720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.288 qpair failed and we were unable to recover it. 00:28:33.288 [2024-12-06 15:45:38.996901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:38.996934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:38.997142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:38.997175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:38.997358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:38.997401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:38.997520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:38.997552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 
00:28:33.289 [2024-12-06 15:45:38.997730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:38.997763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:38.998027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:38.998059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:38.998191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:38.998224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:38.998395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:38.998429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:38.998622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:38.998655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 
00:28:33.289 [2024-12-06 15:45:38.998892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:38.998924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:38.999096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:38.999129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:38.999416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:38.999450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:38.999713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:38.999745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:38.999961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:38.999995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 
00:28:33.289 [2024-12-06 15:45:39.000168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.000201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.000390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.000424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.000615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.000648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.000843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.000876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.001059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.001091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 
00:28:33.289 [2024-12-06 15:45:39.001303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.001336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.001531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.001566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.001753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.001786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.002026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.002060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.002243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.002276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 
00:28:33.289 [2024-12-06 15:45:39.002411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.002445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.002685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.002718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.002949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.002987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.003103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.003135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.003242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.003274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 
00:28:33.289 [2024-12-06 15:45:39.003540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.003574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.003703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.003735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.003998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.004030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.004234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.004267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.004454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.004488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 
00:28:33.289 [2024-12-06 15:45:39.004621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.004654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.004826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.004859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.004988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.005020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.005280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.005313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 00:28:33.289 [2024-12-06 15:45:39.005565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.289 [2024-12-06 15:45:39.005599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.289 qpair failed and we were unable to recover it. 
00:28:33.290 [2024-12-06 15:45:39.005712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.005745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.005986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.006058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.006353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.006404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.006651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.006685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.006932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.006966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 
00:28:33.290 [2024-12-06 15:45:39.007158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.007190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.007477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.007512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.007647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.007680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.007942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.007974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.008214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.008248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 
00:28:33.290 [2024-12-06 15:45:39.008381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.008417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.008616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.008648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.008760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.008792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.009057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.009090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.009358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.009412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 
00:28:33.290 [2024-12-06 15:45:39.009690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.009723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.010016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.010050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.010186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.010219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.010391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.010425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.010553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.010586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 
00:28:33.290 [2024-12-06 15:45:39.010759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.010793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.011034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.011067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.011192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.011225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.011417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.011451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.011727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.011760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 
00:28:33.290 [2024-12-06 15:45:39.011970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.012003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.012187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.012220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.012349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.012391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.012595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.012628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.012799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.012832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 
00:28:33.290 [2024-12-06 15:45:39.013010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.013043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.013150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.013182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.013379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.013415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.013617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.013650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.013885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.013918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 
00:28:33.290 [2024-12-06 15:45:39.014090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.014123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.014305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.290 [2024-12-06 15:45:39.014338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.290 qpair failed and we were unable to recover it. 00:28:33.290 [2024-12-06 15:45:39.014526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.291 [2024-12-06 15:45:39.014561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.291 qpair failed and we were unable to recover it. 00:28:33.291 [2024-12-06 15:45:39.014748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.291 [2024-12-06 15:45:39.014781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.291 qpair failed and we were unable to recover it. 00:28:33.291 [2024-12-06 15:45:39.014897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.291 [2024-12-06 15:45:39.014930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.291 qpair failed and we were unable to recover it. 
00:28:33.291 [2024-12-06 15:45:39.015098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.291 [2024-12-06 15:45:39.015131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.291 qpair failed and we were unable to recover it. 00:28:33.291 [2024-12-06 15:45:39.015425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.291 [2024-12-06 15:45:39.015460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.291 qpair failed and we were unable to recover it. 00:28:33.291 [2024-12-06 15:45:39.015630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.291 [2024-12-06 15:45:39.015664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.291 qpair failed and we were unable to recover it. 00:28:33.291 [2024-12-06 15:45:39.015849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.291 [2024-12-06 15:45:39.015882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.291 qpair failed and we were unable to recover it. 00:28:33.291 [2024-12-06 15:45:39.016071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.291 [2024-12-06 15:45:39.016104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.291 qpair failed and we were unable to recover it. 
00:28:33.291 [2024-12-06 15:45:39.016231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.291 [2024-12-06 15:45:39.016263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.291 qpair failed and we were unable to recover it. 00:28:33.291 [2024-12-06 15:45:39.016473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.291 [2024-12-06 15:45:39.016507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.291 qpair failed and we were unable to recover it. 00:28:33.291 [2024-12-06 15:45:39.016681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.291 [2024-12-06 15:45:39.016714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.291 qpair failed and we were unable to recover it. 00:28:33.291 [2024-12-06 15:45:39.016926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.291 [2024-12-06 15:45:39.016958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.291 qpair failed and we were unable to recover it. 00:28:33.291 [2024-12-06 15:45:39.017088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.291 [2024-12-06 15:45:39.017121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.291 qpair failed and we were unable to recover it. 
00:28:33.291 [2024-12-06 15:45:39.017309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.017342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.017546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.017580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.017843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.017875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.018053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.018086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.018214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.018253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.018431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.018465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.018661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.018695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.018879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.018911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.019148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.019180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.019407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.019443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.019615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.019648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.019854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.019887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.020062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.020095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.020230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.020263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.020448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.020483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.020592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.020626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.020795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.020827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.021096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.291 [2024-12-06 15:45:39.021129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.291 qpair failed and we were unable to recover it.
00:28:33.291 [2024-12-06 15:45:39.021276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.021310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.021513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.021547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.021730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.021763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.021882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.021915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.022039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.022071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.022272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.022304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.022523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.022558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.022805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.022839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.023023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.023056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.023167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.023201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.023388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.023422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.023613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.023647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.023829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.023862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.024131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.024165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.024336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.024378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.024528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.024563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.024802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.024835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.024950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.024982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.025218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.025251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.025377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.025411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.025530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.025563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.025804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.025837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.026099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.026132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.026308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.026340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.026549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.026582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.026764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.026798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.026983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.027022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.027265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.027298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.027482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.027517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.027705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.027738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.028000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.028032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.028161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.028194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.028459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.028492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.028618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.028651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.028834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.028867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.029054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.029086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.292 [2024-12-06 15:45:39.029214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.292 [2024-12-06 15:45:39.029247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.292 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.029361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.029403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.029586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.029619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.029736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.029768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.029950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.029983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.030168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.030202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.030453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.030487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.030669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.030702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.030940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.030973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.031244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.031277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.031403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.031437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.031562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.031596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.031712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.031744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.032011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.032044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.032249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.032281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.032547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.032582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.032721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.032754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3170447 Killed "${NVMF_APP[@]}" "$@"
00:28:33.293 [2024-12-06 15:45:39.033090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.033163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.033439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.033480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.033705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.033739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:33.293 [2024-12-06 15:45:39.033986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.034020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:33.293 [2024-12-06 15:45:39.034280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.034312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:33.293 [2024-12-06 15:45:39.034517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.034551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:33.293 [2024-12-06 15:45:39.034792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.034826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.035006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.035037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.035167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.035200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.035445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.035479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.035721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.035753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.036004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.036037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.036289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.036322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.036521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.036556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.036747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.036781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.036959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.036992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.037169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.037201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.037387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.037422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.037596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.293 [2024-12-06 15:45:39.037629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.293 qpair failed and we were unable to recover it.
00:28:33.293 [2024-12-06 15:45:39.037745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.294 [2024-12-06 15:45:39.037778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.294 qpair failed and we were unable to recover it.
00:28:33.294 [2024-12-06 15:45:39.037968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.294 [2024-12-06 15:45:39.038000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.294 qpair failed and we were unable to recover it.
00:28:33.294 [2024-12-06 15:45:39.038120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.294 [2024-12-06 15:45:39.038153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.294 qpair failed and we were unable to recover it.
00:28:33.294 [2024-12-06 15:45:39.038337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.038382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 00:28:33.294 [2024-12-06 15:45:39.038525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.038560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 00:28:33.294 [2024-12-06 15:45:39.038774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.038806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 00:28:33.294 [2024-12-06 15:45:39.039013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.039044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 00:28:33.294 [2024-12-06 15:45:39.039215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.039247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 
00:28:33.294 [2024-12-06 15:45:39.039486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.039522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 00:28:33.294 [2024-12-06 15:45:39.039786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.039821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 00:28:33.294 [2024-12-06 15:45:39.039948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.039983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 00:28:33.294 [2024-12-06 15:45:39.040182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.040215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 00:28:33.294 [2024-12-06 15:45:39.040351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.040402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 
00:28:33.294 [2024-12-06 15:45:39.040636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.040669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 00:28:33.294 [2024-12-06 15:45:39.040784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.040817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 00:28:33.294 [2024-12-06 15:45:39.041028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.041060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 00:28:33.294 [2024-12-06 15:45:39.041195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.041228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 00:28:33.294 [2024-12-06 15:45:39.041426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.041460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it. 
00:28:33.294 [2024-12-06 15:45:39.041589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.041625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it.
00:28:33.294 [2024-12-06 15:45:39.041795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.041869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it.
00:28:33.294 [... previous connect()/qpair error for tqpair=0x7f6e84000b90 repeated 8 times between 15:45:39.042022 and 15:45:39.043717 ...]
00:28:33.294 [... connect()/qpair error for tqpair=0x2179be0 repeated 2 times between 15:45:39.043913 and 15:45:39.044175 ...]
00:28:33.294 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3171169
00:28:33.294 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3171169
00:28:33.294 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:33.294 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3171169 ']'
00:28:33.294 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:33.294 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:33.294 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:33.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:33.294 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:33.294 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:33.294 [2024-12-06 15:45:39.044284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.294 [2024-12-06 15:45:39.044317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.294 qpair failed and we were unable to recover it.
00:28:33.295 [... previous connect()/qpair error for tqpair=0x2179be0 repeated 39 times between 15:45:39.044532 and 15:45:39.053148 ...]
00:28:33.295 [2024-12-06 15:45:39.053346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.295 [2024-12-06 15:45:39.053395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.053818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.053860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.054108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.054180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.054390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.054428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.054662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.054700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 
00:28:33.296 [2024-12-06 15:45:39.054972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.055006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.055192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.055226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.055413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.055447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.055708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.055741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.055867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.055901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 
00:28:33.296 [2024-12-06 15:45:39.056010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.056043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.056234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.056268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.056412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.056451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.056693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.056728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.056849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.056881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 
00:28:33.296 [2024-12-06 15:45:39.057077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.057110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.057303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.057338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.057555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.057599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.057739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.057774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.057951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.057985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 
00:28:33.296 [2024-12-06 15:45:39.058171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.058209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.058392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.058426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.058553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.058589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.058775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.058809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 00:28:33.296 [2024-12-06 15:45:39.059005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.296 [2024-12-06 15:45:39.059039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.296 qpair failed and we were unable to recover it. 
00:28:33.296 [2024-12-06 15:45:39.059143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.296 [2024-12-06 15:45:39.059184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.296 qpair failed and we were unable to recover it.
00:28:33.296 [2024-12-06 15:45:39.059445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.296 [2024-12-06 15:45:39.059486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.296 qpair failed and we were unable to recover it.
00:28:33.296 [2024-12-06 15:45:39.059753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.296 [2024-12-06 15:45:39.059793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.296 qpair failed and we were unable to recover it.
00:28:33.296 [2024-12-06 15:45:39.059927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.296 [2024-12-06 15:45:39.059959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.296 qpair failed and we were unable to recover it.
00:28:33.296 [2024-12-06 15:45:39.060091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.296 [2024-12-06 15:45:39.060134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.296 qpair failed and we were unable to recover it.
00:28:33.296 [2024-12-06 15:45:39.060318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.296 [2024-12-06 15:45:39.060351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.296 qpair failed and we were unable to recover it.
00:28:33.296 [2024-12-06 15:45:39.060603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.296 [2024-12-06 15:45:39.060636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.296 qpair failed and we were unable to recover it.
00:28:33.296 [2024-12-06 15:45:39.060810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.296 [2024-12-06 15:45:39.060844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.296 qpair failed and we were unable to recover it.
00:28:33.296 [2024-12-06 15:45:39.061025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.296 [2024-12-06 15:45:39.061061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.296 qpair failed and we were unable to recover it.
00:28:33.296 [2024-12-06 15:45:39.061248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.061284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.061529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.061569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.061683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.061716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.061837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.061869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.061974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.062006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.062204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.062238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.062477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.062512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.062650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.062693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.062823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.062857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.063120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.063156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.063293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.063327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.063588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.063624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.063838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.063871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.063984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.064018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.064209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.064247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.064430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.064465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.064570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.064604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.064789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.064824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.065022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.065062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.065236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.065269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.065585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.065625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.065830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.065865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.066067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.066101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.066235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.066270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.066413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.066451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.066690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.066725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.066987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.067021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.067269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.067307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.067532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.067569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.067698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.067732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.067972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.068007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.068195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.068227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.068517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.068552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.068710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.068750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.068940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.068977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.069166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.069200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.069467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.069501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.069615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.297 [2024-12-06 15:45:39.069648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.297 qpair failed and we were unable to recover it.
00:28:33.297 [2024-12-06 15:45:39.069781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.069814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.070008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.070043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.070168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.070204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.070330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.070365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.070550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.070583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.070702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.070737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.070847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.070881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.071071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.071103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.071281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.071313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.071591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.071626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.071870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.071904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.072021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.072054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.072249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.072283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.072471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.072505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.072626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.072659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.072761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.072795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.072920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.072953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.073082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.073114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.073234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.073271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.073525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.073561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.073678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.073712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.073939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.074011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.074155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.074191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.074385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.074421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.074678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.074711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.074844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.074876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.075058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.075093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.075355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.075402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.075655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.075688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.075823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.075857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.075992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.076025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.076267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.076300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.076427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.076462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.076581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.076614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.076792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.076834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.077024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.077058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.077263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.077296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.077478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.298 [2024-12-06 15:45:39.077512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.298 qpair failed and we were unable to recover it.
00:28:33.298 [2024-12-06 15:45:39.077752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.077785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.299 [2024-12-06 15:45:39.077924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.077958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.299 [2024-12-06 15:45:39.078199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.078233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.299 [2024-12-06 15:45:39.078428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.078463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.299 [2024-12-06 15:45:39.078653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.078687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.299 [2024-12-06 15:45:39.078801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.078835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.299 [2024-12-06 15:45:39.079075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.079110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.299 [2024-12-06 15:45:39.079293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.079325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.299 [2024-12-06 15:45:39.079514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.079548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.299 [2024-12-06 15:45:39.079751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.079786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.299 [2024-12-06 15:45:39.080042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.299 [2024-12-06 15:45:39.080075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.299 qpair failed and we were unable to recover it. 00:28:33.299 [2024-12-06 15:45:39.080285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.299 [2024-12-06 15:45:39.080318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.299 qpair failed and we were unable to recover it. 00:28:33.299 [2024-12-06 15:45:39.080559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.299 [2024-12-06 15:45:39.080595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.299 qpair failed and we were unable to recover it. 00:28:33.299 [2024-12-06 15:45:39.080723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.299 [2024-12-06 15:45:39.080756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.299 qpair failed and we were unable to recover it. 00:28:33.299 [2024-12-06 15:45:39.080933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.299 [2024-12-06 15:45:39.080966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.299 qpair failed and we were unable to recover it. 
00:28:33.299 [2024-12-06 15:45:39.081096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.299 [2024-12-06 15:45:39.081129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.299 qpair failed and we were unable to recover it. 00:28:33.299 [2024-12-06 15:45:39.081246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.299 [2024-12-06 15:45:39.081279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.299 qpair failed and we were unable to recover it. 00:28:33.299 [2024-12-06 15:45:39.081467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.299 [2024-12-06 15:45:39.081501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.299 qpair failed and we were unable to recover it. 00:28:33.299 [2024-12-06 15:45:39.081673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.299 [2024-12-06 15:45:39.081709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.299 qpair failed and we were unable to recover it. 00:28:33.299 [2024-12-06 15:45:39.081949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.299 [2024-12-06 15:45:39.081983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.299 qpair failed and we were unable to recover it. 
00:28:33.299 [2024-12-06 15:45:39.082189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.082222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.299 [2024-12-06 15:45:39.082350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.082392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.299 [2024-12-06 15:45:39.082584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.082618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.299 [2024-12-06 15:45:39.082766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.082845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.299 [2024-12-06 15:45:39.082985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.299 [2024-12-06 15:45:39.083022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.299 qpair failed and we were unable to recover it.
00:28:33.300 [2024-12-06 15:45:39.090853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.300 [2024-12-06 15:45:39.090884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.300 qpair failed and we were unable to recover it.
00:28:33.300 [2024-12-06 15:45:39.091005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.300 [2024-12-06 15:45:39.091040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.300 qpair failed and we were unable to recover it.
00:28:33.300 [2024-12-06 15:45:39.091181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.300 [2024-12-06 15:45:39.091214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.300 qpair failed and we were unable to recover it.
00:28:33.300 [2024-12-06 15:45:39.091508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.300 [2024-12-06 15:45:39.091582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.300 qpair failed and we were unable to recover it.
00:28:33.300 [2024-12-06 15:45:39.091780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.300 [2024-12-06 15:45:39.091816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.300 qpair failed and we were unable to recover it.
00:28:33.300 [2024-12-06 15:45:39.092017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.300 [2024-12-06 15:45:39.092051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.300 [2024-12-06 15:45:39.092051] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization...
00:28:33.300 qpair failed and we were unable to recover it.
00:28:33.300 [2024-12-06 15:45:39.092092] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:33.300 [2024-12-06 15:45:39.092238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.300 [2024-12-06 15:45:39.092269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.300 qpair failed and we were unable to recover it.
00:28:33.300 [2024-12-06 15:45:39.092471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.300 [2024-12-06 15:45:39.092505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.300 qpair failed and we were unable to recover it.
00:28:33.300 [2024-12-06 15:45:39.092705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.300 [2024-12-06 15:45:39.092736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.300 qpair failed and we were unable to recover it.
00:28:33.301 [2024-12-06 15:45:39.095048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.301 [2024-12-06 15:45:39.095080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.301 qpair failed and we were unable to recover it.
00:28:33.301 [2024-12-06 15:45:39.095214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.301 [2024-12-06 15:45:39.095248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.301 qpair failed and we were unable to recover it.
00:28:33.301 [2024-12-06 15:45:39.095360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.301 [2024-12-06 15:45:39.095430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.301 qpair failed and we were unable to recover it.
00:28:33.301 [2024-12-06 15:45:39.095609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.301 [2024-12-06 15:45:39.095642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.301 qpair failed and we were unable to recover it.
00:28:33.301 [2024-12-06 15:45:39.095816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.301 [2024-12-06 15:45:39.095849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.301 qpair failed and we were unable to recover it.
00:28:33.302 [2024-12-06 15:45:39.102579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.102613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.102748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.102786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.102920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.102953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.103132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.103166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.103365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.103430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 
00:28:33.302 [2024-12-06 15:45:39.103694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.103727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.103911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.103944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.104137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.104170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.104452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.104486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.104668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.104701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 
00:28:33.302 [2024-12-06 15:45:39.104825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.104859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.104989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.105023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.105196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.105229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.105359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.105403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.105596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.105630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 
00:28:33.302 [2024-12-06 15:45:39.105818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.105853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.106026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.106059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.106303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.106337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.106534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.106570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.106775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.106809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 
00:28:33.302 [2024-12-06 15:45:39.106986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.107020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.107258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.107290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.107475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.107508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.107686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.107718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.107844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.107877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 
00:28:33.302 [2024-12-06 15:45:39.108050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.108082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.108331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.108363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.108503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.108536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.108721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.108754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.302 [2024-12-06 15:45:39.108966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.108998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 
00:28:33.302 [2024-12-06 15:45:39.109244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.302 [2024-12-06 15:45:39.109276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.302 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.109449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.109482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.109664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.109696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.109875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.109909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.110083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.110118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-12-06 15:45:39.110320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.110352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.110573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.110607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.110843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.110875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.110989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.111021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.111201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.111234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-12-06 15:45:39.111335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.111381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.111617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.111664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.111840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.111871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.112047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.112079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.112262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.112297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-12-06 15:45:39.112584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.112619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.112757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.112789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.112983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.113015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.113139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.113171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.113358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.113399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-12-06 15:45:39.113580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.113612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.113896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.113929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.114138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.114170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.114347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.114389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.114602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.114636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-12-06 15:45:39.114815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.114849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.114982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.115014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.115204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.115236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.115348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.115398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.115656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.115691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-12-06 15:45:39.115872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.115906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.116104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.116137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.116386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.116420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.116610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.116644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 00:28:33.303 [2024-12-06 15:45:39.116885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.303 [2024-12-06 15:45:39.116918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.303 qpair failed and we were unable to recover it. 
00:28:33.303 [2024-12-06 15:45:39.117054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.117086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.117275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.117307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.117508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.117543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.117743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.117775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.117949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.117981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 
00:28:33.304 [2024-12-06 15:45:39.118105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.118137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.118328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.118361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.118560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.118594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.118728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.118760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.118962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.118995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 
00:28:33.304 [2024-12-06 15:45:39.119270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.119304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.119494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.119529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.119635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.119667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.119852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.119886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.120134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.120168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 
00:28:33.304 [2024-12-06 15:45:39.120405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.120440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.120661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.120700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.120842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.120874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.121007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.121041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.121174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.121207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 
00:28:33.304 [2024-12-06 15:45:39.121466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.121501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.121676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.121710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.121899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.121932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.122120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.122153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.122329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.122362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 
00:28:33.304 [2024-12-06 15:45:39.122600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.122635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.122753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.122787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.122970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.123004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.123248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.123282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 00:28:33.304 [2024-12-06 15:45:39.123593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.304 [2024-12-06 15:45:39.123629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.304 qpair failed and we were unable to recover it. 
00:28:33.304 [2024-12-06 15:45:39.123823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.123857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.123962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.123995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.124248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.124282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.124463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.124496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.124669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.124702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 
00:28:33.305 [2024-12-06 15:45:39.124825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.124859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.124976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.125009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.125263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.125296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.125427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.125462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.125653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.125687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 
00:28:33.305 [2024-12-06 15:45:39.125947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.125979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.126247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.126283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.126468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.126501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.126695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.126736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.126933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.126966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 
00:28:33.305 [2024-12-06 15:45:39.127099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.127131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.127385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.127419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.127658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.127691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.127823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.127857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.128043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.128075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 
00:28:33.305 [2024-12-06 15:45:39.128337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.128380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.128503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.128536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.128777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.128811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.129002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.129036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.129164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.129199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 
00:28:33.305 [2024-12-06 15:45:39.129389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.129424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.129558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.129604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.129708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.129741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.129940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.129973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.130098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.130133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 
00:28:33.305 [2024-12-06 15:45:39.130328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.130363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.130477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.130511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.130613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.130646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.130780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.130815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.130936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.130969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 
00:28:33.305 [2024-12-06 15:45:39.131087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.305 [2024-12-06 15:45:39.131123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.305 qpair failed and we were unable to recover it. 00:28:33.305 [2024-12-06 15:45:39.131297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.131330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.131532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.131566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.131831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.131865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.131986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.132019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 
00:28:33.306 [2024-12-06 15:45:39.132207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.132240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.132348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.132391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.132506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.132538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.132789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.132825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.132999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.133033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 
00:28:33.306 [2024-12-06 15:45:39.133312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.133344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.133543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.133579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.133708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.133743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.134009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.134041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.134238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.134271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 
00:28:33.306 [2024-12-06 15:45:39.134480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.134516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.134704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.134737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.134861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.134895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.135022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.135074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.135337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.135387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 
00:28:33.306 [2024-12-06 15:45:39.135576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.135611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.135787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.135823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.135936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.135979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.136090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.136122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.136364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.136407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 
00:28:33.306 [2024-12-06 15:45:39.136529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.136565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.136684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.136717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.136847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.136882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.137073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.137118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.137366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.137414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 
00:28:33.306 [2024-12-06 15:45:39.137731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.137769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.137947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.137984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.138106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.138141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.138361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.138411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 00:28:33.306 [2024-12-06 15:45:39.138634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.306 [2024-12-06 15:45:39.138667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.306 qpair failed and we were unable to recover it. 
00:28:33.307 [2024-12-06 15:45:39.138778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.307 [2024-12-06 15:45:39.138816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.307 qpair failed and we were unable to recover it. 00:28:33.307 [2024-12-06 15:45:39.138986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.307 [2024-12-06 15:45:39.139021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.307 qpair failed and we were unable to recover it. 00:28:33.307 [2024-12-06 15:45:39.139260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.307 [2024-12-06 15:45:39.139293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.307 qpair failed and we were unable to recover it. 00:28:33.307 [2024-12-06 15:45:39.139470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.307 [2024-12-06 15:45:39.139506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.307 qpair failed and we were unable to recover it. 00:28:33.307 [2024-12-06 15:45:39.139686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.307 [2024-12-06 15:45:39.139720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.307 qpair failed and we were unable to recover it. 
00:28:33.307 [2024-12-06 15:45:39.139903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.307 [2024-12-06 15:45:39.139940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.307 qpair failed and we were unable to recover it. 00:28:33.307 [2024-12-06 15:45:39.140062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.307 [2024-12-06 15:45:39.140096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.307 qpair failed and we were unable to recover it. 00:28:33.307 [2024-12-06 15:45:39.140280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.307 [2024-12-06 15:45:39.140316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.307 qpair failed and we were unable to recover it. 00:28:33.307 [2024-12-06 15:45:39.140543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.307 [2024-12-06 15:45:39.140576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.307 qpair failed and we were unable to recover it. 00:28:33.307 [2024-12-06 15:45:39.140819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.307 [2024-12-06 15:45:39.140853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.307 qpair failed and we were unable to recover it. 
00:28:33.309 [2024-12-06 15:45:39.161663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.309 [2024-12-06 15:45:39.161709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.309 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.163006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.163039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.163244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.163277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.163515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.163550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.163788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.163821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.164084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.164118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.164244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.164277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.164485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.164518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.164697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.164730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.164903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.164943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.165081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.165115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.165307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.165341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.165492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.165525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.165635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.165669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.165965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.165998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.166135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.166167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.166349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.166395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.166575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.166609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.166853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.166886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.167022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.167055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.167248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.167281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.167396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.167430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.167566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.167600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.167814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.167849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.168035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.168069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.168281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.168316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.168592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.168631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.168777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.168810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.168990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.169023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.169220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.169253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.169377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.169412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.169539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.169573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.169763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.310 [2024-12-06 15:45:39.169800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.310 qpair failed and we were unable to recover it.
00:28:33.310 [2024-12-06 15:45:39.170011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.170045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.170275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.170308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.170422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.170459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.170613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.170652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.170764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.170797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.170909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.170942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.171126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.171159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.171378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.171413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.171591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.171624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.171761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.171794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.171923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.171956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.172211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.172243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.172364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.172414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.172591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.172626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.172806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.172839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.173035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.173067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.173260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.173294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.173478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.173514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.173754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.173788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.173910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.173921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:33.311 [2024-12-06 15:45:39.173945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.174080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.174112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.174244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.174279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.174404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.174437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.174615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.174648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.174839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.174872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.175002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.175035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.175219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.175254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.175382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.175418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.175533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.175567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.175743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.175776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.175968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.176002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.176117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.176152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.176334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.176377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.176565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.176600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.176728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.176761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.176961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.176995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.177186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.311 [2024-12-06 15:45:39.177219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.311 qpair failed and we were unable to recover it.
00:28:33.311 [2024-12-06 15:45:39.177326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.177359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.177584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.177618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.177754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.177789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.177928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.177962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.178212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.178244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.178347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.178391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.178584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.178617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.178830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.178865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.178986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.179020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.179210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.179245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.179386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.179420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.179537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.179571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.179768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.179801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.179922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.179956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.180079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.180112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.180293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.180328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.180476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.180511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.180626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.180660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.180777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.180813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.180950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.180991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.181176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.181211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.181457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.181495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.181670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.181703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.181848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.181882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.182067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.182101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.182207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.182240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.182386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.182423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.182542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.182576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.182769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.182804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.182999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.183033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.183159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.183193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.183312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.183345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.183486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.183520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.183659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.183697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.183872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.183907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.184041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.184077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.184184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.184218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.184354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.184396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.184523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.184558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.184821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.184857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.312 qpair failed and we were unable to recover it.
00:28:33.312 [2024-12-06 15:45:39.184969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.312 [2024-12-06 15:45:39.185005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.185189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.185224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.185396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.185431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.185569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.185602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.185802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.185836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.185965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.185999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.186207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.186240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.186340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.186382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.186501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.186537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.186714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.186748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.186938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.186973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.187100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.187134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.187308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.187341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.187479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.187512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.187686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.187718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.187893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.187926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.188182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.188215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.188410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.188446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.188734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.188768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.188967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.189006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.189127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.189160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.189285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.189318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.189506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.189542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.189730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.189763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.189937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.189970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.190082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.190115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.190324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.190358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.190498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.190532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.190656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.190688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.190821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.190855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.191099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.191134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.191425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.191460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.191638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.191671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.191802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.191835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.192018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.192051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.192181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.192214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.192418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.192451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.192651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.192686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.192814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.192846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.192969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.193005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.313 [2024-12-06 15:45:39.193252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.313 [2024-12-06 15:45:39.193285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.313 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.193429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.193463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.193581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.193614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.193803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.193836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.194023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.194057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.194179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.194212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.194393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.194427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.194541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.194574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.194772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.194806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.194981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.195015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.195198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.195232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.195523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.195557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.195746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.195781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.195915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.195950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.196133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.196165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.196295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.196328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.196481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.196516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.196699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.196733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.196918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.196950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.197125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.197166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.197318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.197353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.197560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.197596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.197728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.197762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.197886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.197921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.198040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.198076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.198198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.198231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.198382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.198419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.198534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.198567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.198672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.198705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.198829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.198861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.199002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.199036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.199214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.199248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.199489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.199523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.199649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.199684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.199862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.199897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.200016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.200050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.200246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.200280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.200396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.200432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.200554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.200589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.200831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.200864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.201046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.314 [2024-12-06 15:45:39.201081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.314 qpair failed and we were unable to recover it.
00:28:33.314 [2024-12-06 15:45:39.201183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.314 [2024-12-06 15:45:39.201218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.201457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.201492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.201617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.201650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.201770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.201803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.201928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.201963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 
00:28:33.315 [2024-12-06 15:45:39.202158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.202192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.202332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.202366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.202553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.202586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.202774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.202807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.202918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.202951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 
00:28:33.315 [2024-12-06 15:45:39.203191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.203226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.203347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.203392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.203595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.203630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.203821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.203855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.204071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.204105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 
00:28:33.315 [2024-12-06 15:45:39.204349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.204412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.204540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.204574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.204698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.204732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.204843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.204887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 00:28:33.315 [2024-12-06 15:45:39.205140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.315 [2024-12-06 15:45:39.205173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.315 qpair failed and we were unable to recover it. 
00:28:33.315 [2024-12-06 15:45:39.206604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.315 [2024-12-06 15:45:39.206665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.315 qpair failed and we were unable to recover it.
00:28:33.315 [2024-12-06 15:45:39.206813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.315 [2024-12-06 15:45:39.206870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.315 qpair failed and we were unable to recover it.
00:28:33.315 [2024-12-06 15:45:39.207061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.315 [2024-12-06 15:45:39.207101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.315 qpair failed and we were unable to recover it.
00:28:33.315 [2024-12-06 15:45:39.207280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.315 [2024-12-06 15:45:39.207318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.315 qpair failed and we were unable to recover it.
00:28:33.315 [2024-12-06 15:45:39.207535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.315 [2024-12-06 15:45:39.207574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.315 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.214770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.214804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.214976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.215011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.215138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.215178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.215424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.215465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.215545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:33.316 [2024-12-06 15:45:39.215574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:33.316 [2024-12-06 15:45:39.215581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:33.316 [2024-12-06 15:45:39.215588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:33.316 [2024-12-06 15:45:39.215594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:33.316 [2024-12-06 15:45:39.215652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.215686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.215885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.215927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.216182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.216216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.216331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.216365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.216564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.216598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.216714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.216752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.216880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.216930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.217127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.217162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.217127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:28:33.316 [2024-12-06 15:45:39.217236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:28:33.316 [2024-12-06 15:45:39.217335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.217343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:28:33.316 [2024-12-06 15:45:39.217386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.217344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:28:33.316 [2024-12-06 15:45:39.217596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.217629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.217759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.217792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.217962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.217999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.218126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.218161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.316 [2024-12-06 15:45:39.218337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.316 [2024-12-06 15:45:39.218386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.316 qpair failed and we were unable to recover it.
00:28:33.317 [2024-12-06 15:45:39.223684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.223719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.223845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.223879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.224013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.224047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.224170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.224204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.224395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.224431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 
00:28:33.317 [2024-12-06 15:45:39.224540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.224575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.224755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.224790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.224977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.225011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.225190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.225224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.225336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.225377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 
00:28:33.317 [2024-12-06 15:45:39.225554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.225590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.225717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.225751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.225862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.225897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.226082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.226115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.226231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.226266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 
00:28:33.317 [2024-12-06 15:45:39.226377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.226415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.226591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.226626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.226837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.226870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.227048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.317 [2024-12-06 15:45:39.227083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.317 qpair failed and we were unable to recover it. 00:28:33.317 [2024-12-06 15:45:39.227211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.227245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 
00:28:33.318 [2024-12-06 15:45:39.227359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.227407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.227535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.227569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.227685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.227720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.227836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.227871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.228042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.228077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 
00:28:33.318 [2024-12-06 15:45:39.228177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.228212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.228385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.228436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.228641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.228677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.228848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.228883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.229069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.229103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 
00:28:33.318 [2024-12-06 15:45:39.229221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.229255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.229440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.229475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.229722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.229754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.229940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.229974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.230167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.230202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 
00:28:33.318 [2024-12-06 15:45:39.230398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.230432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.230548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.230583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.230704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.230737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.231001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.231041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.231173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.231207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 
00:28:33.318 [2024-12-06 15:45:39.231336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.231381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.231492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.231526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.231711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.231749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.231885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.231920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.232104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.232144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 
00:28:33.318 [2024-12-06 15:45:39.232337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.232381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.232578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.232618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.232761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.232806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.232925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.232960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.233180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.233223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 
00:28:33.318 [2024-12-06 15:45:39.233444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.233481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.233610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.233646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.233838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.233877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.234021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.234057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.234248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.234283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 
00:28:33.318 [2024-12-06 15:45:39.234405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.234441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.234551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.234587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.234700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.234737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.234862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.234902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.235083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.235120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 
00:28:33.318 [2024-12-06 15:45:39.235229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.235264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.318 qpair failed and we were unable to recover it. 00:28:33.318 [2024-12-06 15:45:39.235415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.318 [2024-12-06 15:45:39.235453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-12-06 15:45:39.235560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-12-06 15:45:39.235593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-12-06 15:45:39.235765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-12-06 15:45:39.235800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-12-06 15:45:39.235940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-12-06 15:45:39.235973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 
00:28:33.319 [2024-12-06 15:45:39.236090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-12-06 15:45:39.236133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-12-06 15:45:39.236245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-12-06 15:45:39.236281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-12-06 15:45:39.236468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-12-06 15:45:39.236514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-12-06 15:45:39.236642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-12-06 15:45:39.236675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-12-06 15:45:39.236821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-12-06 15:45:39.236856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 
00:28:33.319 [2024-12-06 15:45:39.236967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-12-06 15:45:39.237002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-12-06 15:45:39.237108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-12-06 15:45:39.237143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.319 [2024-12-06 15:45:39.237267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.319 [2024-12-06 15:45:39.237302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.319 qpair failed and we were unable to recover it. 00:28:33.593 [2024-12-06 15:45:39.237513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-12-06 15:45:39.237549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.593 qpair failed and we were unable to recover it. 00:28:33.593 [2024-12-06 15:45:39.237673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.593 [2024-12-06 15:45:39.237712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.593 qpair failed and we were unable to recover it. 
00:28:33.593 [2024-12-06 15:45:39.237839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.237874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.237997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.238034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.238163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.238197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.238470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.238556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.238739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.238814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.238965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.239010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.239133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.239168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.239376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.239411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.239544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.239580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.239825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.239859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.240043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.240078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.240260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.240295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.240421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.240465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.240599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.240633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.240812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.240846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.241028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.241062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.241189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.241222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.241463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.241498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.241680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.241714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.241855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.241888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.242068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.242103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.242377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.242412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.242540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.242574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.242755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.242790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.242970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.243005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.243143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.243176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.243439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.243473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.243583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.243616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.243801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.243834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.244016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.244048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.593 [2024-12-06 15:45:39.244238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.593 [2024-12-06 15:45:39.244272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.593 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.244391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.244426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.244678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.244713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.244843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.244877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.245011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.245045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.245166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.245198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.245353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.245393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.245600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.245633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.245757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.245791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.245898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.245930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.246059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.246093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.246203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.246237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.246359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.246399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.246599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.246633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.246758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.246791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.246906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.246940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.247125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.247158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.247330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.247364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.247481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.247514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.247714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.247750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.247866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.247900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.248007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.248041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.248229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.248264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.248476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.248520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.248641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.248674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.248844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.248878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.249118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.249152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.249257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.249292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.249471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.249507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.249612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.249645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.249767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.249800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.249913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.249946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.250072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.250104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.250219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.250254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.250440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.250474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.250577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.250610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.250728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.250778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.250959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.250994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.594 qpair failed and we were unable to recover it.
00:28:33.594 [2024-12-06 15:45:39.251104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.594 [2024-12-06 15:45:39.251136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.251244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.251277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.251409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.251443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.251552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.251585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.251779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.251812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.251923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.251958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.252061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.252094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.252205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.252241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.252365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.252410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.252613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.252649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.252771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.252807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.252916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.252953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.253104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.253139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.253270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.253304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.253417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.253451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.253643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.253679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.253808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.253842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.253972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.254004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.254125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.254160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.254281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.254315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.254447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.254482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.254589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.254622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.254745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.254777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.254966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.254999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.255116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.255151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.255274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.255323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.255454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.255489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.255604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.255639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.255816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.255850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.255974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.256007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.256125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.256159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.256341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.256391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.256501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.256535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.256648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.256682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.256790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.256825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.256957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.256992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.257096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.257130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.595 [2024-12-06 15:45:39.257244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.595 [2024-12-06 15:45:39.257278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.595 qpair failed and we were unable to recover it.
00:28:33.596 [2024-12-06 15:45:39.257454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.596 [2024-12-06 15:45:39.257496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.596 qpair failed and we were unable to recover it.
00:28:33.596 [2024-12-06 15:45:39.257601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.596 [2024-12-06 15:45:39.257635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.596 qpair failed and we were unable to recover it.
00:28:33.596 [2024-12-06 15:45:39.257743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.596 [2024-12-06 15:45:39.257777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.596 qpair failed and we were unable to recover it.
00:28:33.596 [2024-12-06 15:45:39.257900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.596 [2024-12-06 15:45:39.257935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.596 qpair failed and we were unable to recover it.
00:28:33.596 [2024-12-06 15:45:39.258050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.596 [2024-12-06 15:45:39.258084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.596 qpair failed and we were unable to recover it.
00:28:33.596 [2024-12-06 15:45:39.258267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.596 [2024-12-06 15:45:39.258302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.596 qpair failed and we were unable to recover it.
00:28:33.596 [2024-12-06 15:45:39.258431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.596 [2024-12-06 15:45:39.258469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.596 qpair failed and we were unable to recover it.
00:28:33.596 [2024-12-06 15:45:39.258578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.596 [2024-12-06 15:45:39.258611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.596 qpair failed and we were unable to recover it.
00:28:33.596 [2024-12-06 15:45:39.258794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.596 [2024-12-06 15:45:39.258828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.596 qpair failed and we were unable to recover it.
00:28:33.596 [2024-12-06 15:45:39.258942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.258976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.259099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.259133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.259314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.259349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.259478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.259513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.259625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.259660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 
00:28:33.596 [2024-12-06 15:45:39.259834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.259869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.260046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.260080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.260208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.260241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.260427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.260461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.260577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.260611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 
00:28:33.596 [2024-12-06 15:45:39.260801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.260835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.261068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.261101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.261221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.261256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.261381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.261417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.261543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.261579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 
00:28:33.596 [2024-12-06 15:45:39.261702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.261737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.261846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.261881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.261998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.262033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.262169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.262209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.262321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.262356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 
00:28:33.596 [2024-12-06 15:45:39.262493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.262528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.262659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.262694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.262807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.262842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.262971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.263007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.263252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.263289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 
00:28:33.596 [2024-12-06 15:45:39.263463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.263501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.263630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.263665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.263784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.596 [2024-12-06 15:45:39.263819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.596 qpair failed and we were unable to recover it. 00:28:33.596 [2024-12-06 15:45:39.263939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.263974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.264077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.264112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 
00:28:33.597 [2024-12-06 15:45:39.264249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.264284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.264431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.264469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.264647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.264682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.264790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.264824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.264998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.265032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 
00:28:33.597 [2024-12-06 15:45:39.265169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.265203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.265313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.265347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.265473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.265509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.265632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.265666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.265777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.265814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 
00:28:33.597 [2024-12-06 15:45:39.265941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.265976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.266094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.266129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.266303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.266339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.266527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.266563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.266691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.266726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 
00:28:33.597 [2024-12-06 15:45:39.266844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.266882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.267015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.267049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.267171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.267206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.267317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.267352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.267470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.267504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 
00:28:33.597 [2024-12-06 15:45:39.267620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.267654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.267852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.267889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.268063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.268098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.268203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.268239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.268403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.268439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 
00:28:33.597 [2024-12-06 15:45:39.268549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.268583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.268764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.268800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.268917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.268951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.269134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.269175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 00:28:33.597 [2024-12-06 15:45:39.269293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.269329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.597 qpair failed and we were unable to recover it. 
00:28:33.597 [2024-12-06 15:45:39.269449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.597 [2024-12-06 15:45:39.269485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.598 qpair failed and we were unable to recover it. 00:28:33.598 [2024-12-06 15:45:39.269661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.598 [2024-12-06 15:45:39.269696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.598 qpair failed and we were unable to recover it. 00:28:33.598 [2024-12-06 15:45:39.269939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.598 [2024-12-06 15:45:39.269974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.598 qpair failed and we were unable to recover it. 00:28:33.598 [2024-12-06 15:45:39.270149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.598 [2024-12-06 15:45:39.270184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.598 qpair failed and we were unable to recover it. 00:28:33.598 [2024-12-06 15:45:39.270299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.598 [2024-12-06 15:45:39.270332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.598 qpair failed and we were unable to recover it. 
00:28:33.598 [2024-12-06 15:45:39.270565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.598 [2024-12-06 15:45:39.270622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.598 qpair failed and we were unable to recover it.
[message repeated 29 more times for tqpair=0x7f6e84000b90, addr=10.0.0.2, port=4420, timestamps 15:45:39.270784 through 15:45:39.275899]
00:28:33.598 [2024-12-06 15:45:39.276158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.598 [2024-12-06 15:45:39.276191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.598 qpair failed and we were unable to recover it. 00:28:33.598 [2024-12-06 15:45:39.276409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.598 [2024-12-06 15:45:39.276443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.598 qpair failed and we were unable to recover it. 00:28:33.598 [2024-12-06 15:45:39.276661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.598 [2024-12-06 15:45:39.276694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.598 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.276806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.276847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.276953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.276986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 
00:28:33.599 [2024-12-06 15:45:39.277105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.277138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.277256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.277290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.277416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.277452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.277558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.277594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.277715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.277749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 
00:28:33.599 [2024-12-06 15:45:39.277856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.277888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.278026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.278059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.278176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.278208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.278387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.278422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.278533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.278566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 
00:28:33.599 [2024-12-06 15:45:39.278697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.278730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.278855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.278886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.278997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.279031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.279213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.279245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.279381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.279415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 
00:28:33.599 [2024-12-06 15:45:39.279588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.279622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.279738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.279772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.279877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.279910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.280102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.280136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.280260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.280295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 
00:28:33.599 [2024-12-06 15:45:39.280473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.280507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.280635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.280669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.280772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.280806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.280924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.280957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.281059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.281093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 
00:28:33.599 [2024-12-06 15:45:39.281314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.281348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.281468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.281502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.281696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.281730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.281903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.281936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.282047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.282080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 
00:28:33.599 [2024-12-06 15:45:39.282198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.282231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.282361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.282405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.282527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.282560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.282743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.282779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 00:28:33.599 [2024-12-06 15:45:39.282898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.599 [2024-12-06 15:45:39.282934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.599 qpair failed and we were unable to recover it. 
00:28:33.599 [2024-12-06 15:45:39.283068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.283101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.283217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.283251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.283447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.283481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.283585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.283625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.283759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.283795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 
00:28:33.600 [2024-12-06 15:45:39.283916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.283950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.284147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.284186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.284289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.284325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.284468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.284507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.284631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.284667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 
00:28:33.600 [2024-12-06 15:45:39.284856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.284890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.285006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.285040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.285155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.285188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.285293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.285328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.285453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.285489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 
00:28:33.600 [2024-12-06 15:45:39.285663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.285699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.285887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.285921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.286048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.286081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.286187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.286221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.286344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.286392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 
00:28:33.600 [2024-12-06 15:45:39.286576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.286610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.286717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.286751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.286864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.286896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.287006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.287040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.287242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.287276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 
00:28:33.600 [2024-12-06 15:45:39.287398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.287433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.287568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.287602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.287728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.287763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.287884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.287917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.288040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.288074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 
00:28:33.600 [2024-12-06 15:45:39.288197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.288231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.288341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.288403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.288513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.288547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.288748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.288784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.288993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.289028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 
00:28:33.600 [2024-12-06 15:45:39.289245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.289280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.289406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.600 [2024-12-06 15:45:39.289441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.600 qpair failed and we were unable to recover it. 00:28:33.600 [2024-12-06 15:45:39.289632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.289668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.289779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.289814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.289990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.290023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 
00:28:33.601 [2024-12-06 15:45:39.290153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.290187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.290457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.290492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.290624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.290659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.290837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.290886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.291009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.291038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 
00:28:33.601 [2024-12-06 15:45:39.291250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.291281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.291404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.291436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.291604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.291635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.291744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.291774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.291883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.291914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 
00:28:33.601 [2024-12-06 15:45:39.292145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.292175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.292290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.292322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.292448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.292482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.292698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.292728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.292906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.292936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 
00:28:33.601 [2024-12-06 15:45:39.293135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.293165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.293401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.293434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.293557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.293588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.293757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.293788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.293968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.293999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 
00:28:33.601 [2024-12-06 15:45:39.294118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.294149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.294252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.294282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.294451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.294482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.294614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.294644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.294759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.294789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 
00:28:33.601 [2024-12-06 15:45:39.294929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.294960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.295071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.295103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.295365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.295407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.295517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.295549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.295671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.295701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 
00:28:33.601 [2024-12-06 15:45:39.295824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.295856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.295967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.295998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.296111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.296142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.296244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.296275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 00:28:33.601 [2024-12-06 15:45:39.296440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.601 [2024-12-06 15:45:39.296473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.601 qpair failed and we were unable to recover it. 
00:28:33.602 [2024-12-06 15:45:39.296638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.296669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.296913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.296943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.297067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.297098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.297352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.297408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.297585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.297616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 
00:28:33.602 [2024-12-06 15:45:39.297710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.297741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.297872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.297904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.298077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.298108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.298211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.298249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.298356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.298395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 
00:28:33.602 [2024-12-06 15:45:39.298565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.298596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.298719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.298750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.298932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.298962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.299092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.299124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.299321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.299351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 
00:28:33.602 [2024-12-06 15:45:39.299469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.299500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.299678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.299707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.299883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.299913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.300021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.300052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.300151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.300181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 
00:28:33.602 [2024-12-06 15:45:39.300288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.300320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.300458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.300492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.300679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.300711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.300912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.300943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.301131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.301163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 
00:28:33.602 [2024-12-06 15:45:39.301280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.301310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.301432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.301466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.301646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.301677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.301856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.301886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.301996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.302026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 
00:28:33.602 [2024-12-06 15:45:39.302312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.302342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.302569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.602 [2024-12-06 15:45:39.302600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.602 qpair failed and we were unable to recover it. 00:28:33.602 [2024-12-06 15:45:39.302712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.302741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.302921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.302952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.303059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.303089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 
00:28:33.603 [2024-12-06 15:45:39.303244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.303275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.303392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.303424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.303683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.303713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.303842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.303873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.303989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.304019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 
00:28:33.603 [2024-12-06 15:45:39.304180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.304210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.304321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.304352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.304497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.304529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.304629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.304660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.304862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.304892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 
00:28:33.603 [2024-12-06 15:45:39.305004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.305035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.305131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.305161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.305279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.305309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.305469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.305505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.305682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.305713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 
00:28:33.603 [2024-12-06 15:45:39.305829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.305860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.306100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.306131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.306262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.306291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.306420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.306452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.306615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.306645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 
00:28:33.603 [2024-12-06 15:45:39.306893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.306923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.307040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.307071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.307174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.307205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.307390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.307422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 00:28:33.603 [2024-12-06 15:45:39.307537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.603 [2024-12-06 15:45:39.307569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.603 qpair failed and we were unable to recover it. 
00:28:33.604 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.604 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:33.604 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:33.604 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.604 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:33.604 [2024-12-06 15:45:39.312518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.312549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 00:28:33.604 [2024-12-06 15:45:39.312658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.312689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 00:28:33.604 [2024-12-06 15:45:39.312801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.312831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 00:28:33.604 [2024-12-06 15:45:39.313002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.313032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 00:28:33.604 [2024-12-06 15:45:39.313217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.313247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 
00:28:33.604 [2024-12-06 15:45:39.313358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.313397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 00:28:33.604 [2024-12-06 15:45:39.313516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.313548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 00:28:33.604 [2024-12-06 15:45:39.313653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.313684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 00:28:33.604 [2024-12-06 15:45:39.313803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.313835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 00:28:33.604 [2024-12-06 15:45:39.313937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.313968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 
00:28:33.604 [2024-12-06 15:45:39.314076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.314107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 00:28:33.604 [2024-12-06 15:45:39.314287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.314318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 00:28:33.604 [2024-12-06 15:45:39.314546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.314579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 00:28:33.604 [2024-12-06 15:45:39.314752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.314784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 00:28:33.604 [2024-12-06 15:45:39.314913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.314944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 
00:28:33.604 [2024-12-06 15:45:39.315051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.604 [2024-12-06 15:45:39.315081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.604 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.315184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.315215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.315321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.315351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.315628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.315660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.315779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.315810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 
00:28:33.605 [2024-12-06 15:45:39.315931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.315961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.316138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.316170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.316293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.316324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.316545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.316577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.316758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.316789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 
00:28:33.605 [2024-12-06 15:45:39.317005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.317037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.317134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.317164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.317279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.317308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.317433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.317465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.317674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.317706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 
00:28:33.605 [2024-12-06 15:45:39.317818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.317849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.318032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.318063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.318226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.318258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.318374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.318408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.318522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.318557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 
00:28:33.605 [2024-12-06 15:45:39.318740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.318771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.318936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.318966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.319085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.319117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.319287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.319322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.319450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.319484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 
00:28:33.605 [2024-12-06 15:45:39.319608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.319638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.319761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.319793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.319897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.319928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.320091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.320121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.320303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.320334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 
00:28:33.605 [2024-12-06 15:45:39.320465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.320498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.320618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.320648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.320762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.320793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.320899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.320929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.321046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.321079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 
00:28:33.605 [2024-12-06 15:45:39.321193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.321222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.321333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.605 [2024-12-06 15:45:39.321364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.605 qpair failed and we were unable to recover it. 00:28:33.605 [2024-12-06 15:45:39.321492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.606 [2024-12-06 15:45:39.321525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.606 qpair failed and we were unable to recover it. 00:28:33.606 [2024-12-06 15:45:39.321707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.606 [2024-12-06 15:45:39.321741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.606 qpair failed and we were unable to recover it. 00:28:33.606 [2024-12-06 15:45:39.321909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.606 [2024-12-06 15:45:39.321939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.606 qpair failed and we were unable to recover it. 
00:28:33.606 [2024-12-06 15:45:39.322070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.606 [2024-12-06 15:45:39.322106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.606 qpair failed and we were unable to recover it. 00:28:33.606 [2024-12-06 15:45:39.322225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.606 [2024-12-06 15:45:39.322256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.606 qpair failed and we were unable to recover it. 00:28:33.606 [2024-12-06 15:45:39.322355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.606 [2024-12-06 15:45:39.322399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.606 qpair failed and we were unable to recover it. 00:28:33.606 [2024-12-06 15:45:39.322512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.606 [2024-12-06 15:45:39.322543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.606 qpair failed and we were unable to recover it. 00:28:33.606 [2024-12-06 15:45:39.322645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.606 [2024-12-06 15:45:39.322676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.606 qpair failed and we were unable to recover it. 
00:28:33.606 [2024-12-06 15:45:39.322772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.606 [2024-12-06 15:45:39.322802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.606 qpair failed and we were unable to recover it. 00:28:33.606 [2024-12-06 15:45:39.322907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.606 [2024-12-06 15:45:39.322939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.606 qpair failed and we were unable to recover it. 00:28:33.606 [2024-12-06 15:45:39.323110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.606 [2024-12-06 15:45:39.323141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.606 qpair failed and we were unable to recover it. 00:28:33.606 [2024-12-06 15:45:39.323320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.606 [2024-12-06 15:45:39.323353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.606 qpair failed and we were unable to recover it. 00:28:33.606 [2024-12-06 15:45:39.323546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.606 [2024-12-06 15:45:39.323579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.606 qpair failed and we were unable to recover it. 
00:28:33.608 [2024-12-06 15:45:39.335854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.335912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 00:28:33.608 [2024-12-06 15:45:39.336079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.336140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 00:28:33.608 [2024-12-06 15:45:39.336272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.336308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 00:28:33.608 [2024-12-06 15:45:39.336456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.336492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 00:28:33.608 [2024-12-06 15:45:39.336610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.336645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 
00:28:33.608 [2024-12-06 15:45:39.336782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.336817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 00:28:33.608 [2024-12-06 15:45:39.336996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.337029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 00:28:33.608 [2024-12-06 15:45:39.337154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.337186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 00:28:33.608 [2024-12-06 15:45:39.337291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.337325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 00:28:33.608 [2024-12-06 15:45:39.337442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.337474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 
00:28:33.608 [2024-12-06 15:45:39.337593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.337624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 00:28:33.608 [2024-12-06 15:45:39.337723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.337750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 00:28:33.608 [2024-12-06 15:45:39.337843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.337870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 00:28:33.608 [2024-12-06 15:45:39.337984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.338018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 00:28:33.608 [2024-12-06 15:45:39.338186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.608 [2024-12-06 15:45:39.338213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e84000b90 with addr=10.0.0.2, port=4420 00:28:33.608 qpair failed and we were unable to recover it. 
00:28:33.609 [2024-12-06 15:45:39.341950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.609 [2024-12-06 15:45:39.341990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.609 qpair failed and we were unable to recover it.
00:28:33.609 [2024-12-06 15:45:39.342151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.609 [2024-12-06 15:45:39.342206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.609 qpair failed and we were unable to recover it.
00:28:33.609 [2024-12-06 15:45:39.342330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.609 [2024-12-06 15:45:39.342366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.609 qpair failed and we were unable to recover it.
00:28:33.610 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:33.610 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:33.610 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.610 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:33.612 [2024-12-06 15:45:39.355785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.355819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.355923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.355956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.356083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.356117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.356292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.356326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.356517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.356552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 
00:28:33.612 [2024-12-06 15:45:39.356671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.356704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.356827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.356860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.356985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.357018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.357199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.357233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.357339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.357385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 
00:28:33.612 [2024-12-06 15:45:39.357502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.357535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.357667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.357702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.357826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.357860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.358061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.358094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.358323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.358357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 
00:28:33.612 [2024-12-06 15:45:39.358495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.358531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.358639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.358673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.358801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.358835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.358966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.358999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.359108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.359141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 
00:28:33.612 [2024-12-06 15:45:39.359265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.359297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.359499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.359534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.359643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.359678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.359786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.359821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.359944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.359977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 
00:28:33.612 [2024-12-06 15:45:39.360114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.360149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.360326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.360360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.360496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.360531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.360659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.360693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.360801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.360836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 
00:28:33.612 [2024-12-06 15:45:39.360943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.360977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.361159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.361193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.361394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.361431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.361539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.361574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 00:28:33.612 [2024-12-06 15:45:39.361679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.612 [2024-12-06 15:45:39.361722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.612 qpair failed and we were unable to recover it. 
00:28:33.613 [2024-12-06 15:45:39.361964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.362000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.362108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.362141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.362276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.362311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.362493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.362528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.362799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.362834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 
00:28:33.613 [2024-12-06 15:45:39.362954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.362988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.363097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.363132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.363257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.363292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.363400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.363435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.363679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.363715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 
00:28:33.613 [2024-12-06 15:45:39.363822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.363858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.363991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.364025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.364128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.364162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.364278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.364312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.364435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.364471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 
00:28:33.613 [2024-12-06 15:45:39.364611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.364645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.364840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.364874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.365060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.365095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.365270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.365302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.365432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.365467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 
00:28:33.613 [2024-12-06 15:45:39.365582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.365615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.365742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.365776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.365897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.365931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.366055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.366091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.366195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.366229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 
00:28:33.613 [2024-12-06 15:45:39.366400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.366435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.366634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.366668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.366777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.366810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.366910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.366944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.367130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.367165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 
00:28:33.613 [2024-12-06 15:45:39.367332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.367365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.367544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.367578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.367727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.367763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.367988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.368022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.368141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.368175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 
00:28:33.613 [2024-12-06 15:45:39.368353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.368395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.368508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.613 [2024-12-06 15:45:39.368542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.613 qpair failed and we were unable to recover it. 00:28:33.613 [2024-12-06 15:45:39.368648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.614 [2024-12-06 15:45:39.368683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.614 qpair failed and we were unable to recover it. 00:28:33.614 [2024-12-06 15:45:39.368876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.614 [2024-12-06 15:45:39.368909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.614 qpair failed and we were unable to recover it. 00:28:33.614 [2024-12-06 15:45:39.369017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.614 [2024-12-06 15:45:39.369056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.614 qpair failed and we were unable to recover it. 
00:28:33.614 [2024-12-06 15:45:39.369170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.614 [2024-12-06 15:45:39.369203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.614 qpair failed and we were unable to recover it. 00:28:33.614 [2024-12-06 15:45:39.369343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.614 [2024-12-06 15:45:39.369387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.614 qpair failed and we were unable to recover it. 00:28:33.614 [2024-12-06 15:45:39.369568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.614 [2024-12-06 15:45:39.369603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.614 qpair failed and we were unable to recover it. 00:28:33.614 [2024-12-06 15:45:39.369779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.614 [2024-12-06 15:45:39.369814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.614 qpair failed and we were unable to recover it. 00:28:33.614 [2024-12-06 15:45:39.370017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.614 [2024-12-06 15:45:39.370053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.614 qpair failed and we were unable to recover it. 
00:28:33.614 [2024-12-06 15:45:39.370177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.614 [2024-12-06 15:45:39.370213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.614 qpair failed and we were unable to recover it. 00:28:33.614 [2024-12-06 15:45:39.370331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.614 [2024-12-06 15:45:39.370378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.614 qpair failed and we were unable to recover it. 00:28:33.614 [2024-12-06 15:45:39.370503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.614 [2024-12-06 15:45:39.370539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.614 qpair failed and we were unable to recover it. 00:28:33.614 [2024-12-06 15:45:39.370674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.614 [2024-12-06 15:45:39.370708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.614 qpair failed and we were unable to recover it. 00:28:33.614 [2024-12-06 15:45:39.370901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.614 [2024-12-06 15:45:39.370936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.614 qpair failed and we were unable to recover it. 
00:28:33.614 [2024-12-06 15:45:39.371047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.371082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.371206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.371243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.371384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.371420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.371605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.371640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.371745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.371779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.372022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.372060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.372166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.372201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.372317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.372352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.372556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.372592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.372735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.372770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.372877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.372912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.373091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.373126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.373258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.373293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.373421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.373456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.373642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.373676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.373796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.373832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.374081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.374115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.374238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.374272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.374518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.374553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.374732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.374765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.374886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.374920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.375054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.375089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.375264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.614 [2024-12-06 15:45:39.375298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.614 qpair failed and we were unable to recover it.
00:28:33.614 [2024-12-06 15:45:39.375493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 Malloc0
00:28:33.615 [2024-12-06 15:45:39.375530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.375644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.375678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.375792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.375826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.615 [2024-12-06 15:45:39.376007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.376041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.376160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.376196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.376385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.615 [2024-12-06 15:45:39.376424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.376550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.376586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.376784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.376818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.376990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.377023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.377142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.377176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.377307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.377342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.377482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.377517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.377692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.377726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.377919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.377954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.378066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.378101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.378236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.378270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.378394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.378430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.378549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.378583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.378713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.378749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.378989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.379024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.379129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.379163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.379284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.379318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.379505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.379542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.379656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.379691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.379812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.379846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.379964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.379997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.380127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.380160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.380311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.380345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.380492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.380526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.380649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.380683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.380799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.380834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.380969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.381019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.381216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.381250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.381360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.381405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.381515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.381549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.615 [2024-12-06 15:45:39.381736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.615 [2024-12-06 15:45:39.381769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.615 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.381890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.381924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.382033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.382066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.382304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.382337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.382547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.382599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.382724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.382760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.382875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.382910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.383022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.383036] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:33.616 [2024-12-06 15:45:39.383056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.383244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.383278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.383401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.383446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.383633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.383668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.383777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.383811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.383932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.383967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.384161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.384195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.384384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.384420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.384610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.384645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.384767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.384805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.384929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.384962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.385081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.385114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.385300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.385333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.385473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.385511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.385638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.385672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.385782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.385816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.386007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.386041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.386147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.386181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.386306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.386340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.386467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.386501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.386688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.386721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.386834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.386868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.386987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.387022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.387194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.387228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.387344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.387389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.387538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.387573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.387674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.387708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.616 qpair failed and we were unable to recover it.
00:28:33.616 [2024-12-06 15:45:39.387818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.616 [2024-12-06 15:45:39.387850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.617 qpair failed and we were unable to recover it.
00:28:33.617 [2024-12-06 15:45:39.388035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.617 [2024-12-06 15:45:39.388068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.617 qpair failed and we were unable to recover it.
00:28:33.617 [2024-12-06 15:45:39.388314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.388347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.388551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.388585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.388703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.388738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.388856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.388890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.389000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.389033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 
00:28:33.617 [2024-12-06 15:45:39.389207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.389240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.389345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.389410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.389516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.389549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.389652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.389687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.389878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.389912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 
00:28:33.617 [2024-12-06 15:45:39.390016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.390049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.390158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.390192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.390297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.390331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.390450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.390490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.390620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.390654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 
00:28:33.617 [2024-12-06 15:45:39.390918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.390952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.391061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.391094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.391211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.391245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.391433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.391468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.391648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.391681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 
00:28:33.617 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.617 [2024-12-06 15:45:39.391856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.391891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.392064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:33.617 [2024-12-06 15:45:39.392099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.392257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.392292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.617 [2024-12-06 15:45:39.392535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.392570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 
00:28:33.617 [2024-12-06 15:45:39.392690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:33.617 [2024-12-06 15:45:39.392724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.392850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.392883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.393061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.393097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.393206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.393240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.393364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.393405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 
00:28:33.617 [2024-12-06 15:45:39.393555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.393588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.393718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.393752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.393946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.393979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.394103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.394137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 00:28:33.617 [2024-12-06 15:45:39.394267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.394302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.617 qpair failed and we were unable to recover it. 
00:28:33.617 [2024-12-06 15:45:39.394481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.617 [2024-12-06 15:45:39.394516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.394620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.394653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.394767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.394801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.394980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.395015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.395220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.395258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 
00:28:33.618 [2024-12-06 15:45:39.395508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.395544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.395655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.395691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.395808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.395846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.396026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.396059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.396192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.396225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 
00:28:33.618 [2024-12-06 15:45:39.396331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.396364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.396492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.396526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.396632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.396666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.396942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.396977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.397099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.397132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 
00:28:33.618 [2024-12-06 15:45:39.397251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.397283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.397470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.397506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.397630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.397669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.397780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.397812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.397926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.397958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 
00:28:33.618 [2024-12-06 15:45:39.398134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.398167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.398424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.398459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.398588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.398621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.398794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.398834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.398957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.398993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 
00:28:33.618 [2024-12-06 15:45:39.399112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.399145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.399261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.399294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.399411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.399444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.399556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.399590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.399700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.399734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 
00:28:33.618 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.618 [2024-12-06 15:45:39.399905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.399947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.400076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.400110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:33.618 [2024-12-06 15:45:39.400227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.400261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.400388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.400424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 
00:28:33.618 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.618 [2024-12-06 15:45:39.400602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.400637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:33.618 [2024-12-06 15:45:39.400813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.618 [2024-12-06 15:45:39.400849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.618 qpair failed and we were unable to recover it. 00:28:33.618 [2024-12-06 15:45:39.400960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.400995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.401107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.401142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 
00:28:33.619 [2024-12-06 15:45:39.401249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.401285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.401400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.401436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.401548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.401583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.401687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.401723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.401843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.401888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 
00:28:33.619 [2024-12-06 15:45:39.402135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.402167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.402342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.402408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.402534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.402569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.402773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.402808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.402921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.402954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 
00:28:33.619 [2024-12-06 15:45:39.403072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.403106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.403225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.403258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.403379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.403414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.403614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.403646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.403825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.403858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 
00:28:33.619 [2024-12-06 15:45:39.403977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.404010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.404121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.404155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.404266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.404300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.404489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.404524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 00:28:33.619 [2024-12-06 15:45:39.404657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-06 15:45:39.404690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420 00:28:33.619 qpair failed and we were unable to recover it. 
00:28:33.619 [2024-12-06 15:45:39.404876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.404912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.405019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.405053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.405173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.405208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.405396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.405434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.405552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.405587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.405765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.405800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.405906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.405939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.406049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.406082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.406194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.406229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.406363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.406418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.406547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.406581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.406695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.406728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.406904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.406939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.407053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.407087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.407220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.407254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.619 [2024-12-06 15:45:39.407380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-06 15:45:39.407415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.619 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.407592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.407626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.407755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.407791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.407909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.407943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.408132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.408166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:33.620 [2024-12-06 15:45:39.408344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.408390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.408516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.620 [2024-12-06 15:45:39.408551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.408724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.408758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:33.620 [2024-12-06 15:45:39.408887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.408921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.409041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.409075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.409193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.409229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.409356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.409400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.409588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.409623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.409827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.409862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.409987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.410022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.410148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.410181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.410305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.410339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e80000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.410475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.410515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2179be0 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.410661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.410702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.410823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.410856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.410974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.411006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.411146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.620 [2024-12-06 15:45:39.411183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6e8c000b90 with addr=10.0.0.2, port=4420
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 [2024-12-06 15:45:39.411261] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:33.620 [2024-12-06 15:45:39.413663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.620 [2024-12-06 15:45:39.413769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.620 [2024-12-06 15:45:39.413817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.620 [2024-12-06 15:45:39.413844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.620 [2024-12-06 15:45:39.413866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.620 [2024-12-06 15:45:39.413920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.620 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:33.620 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.620 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:33.620 [2024-12-06 15:45:39.423603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.620 [2024-12-06 15:45:39.423696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.620 [2024-12-06 15:45:39.423726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.620 [2024-12-06 15:45:39.423742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.620 [2024-12-06 15:45:39.423756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.620 [2024-12-06 15:45:39.423790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.620 qpair failed and we were unable to recover it.
00:28:33.620 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.620 15:45:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3170571
00:28:33.620 [2024-12-06 15:45:39.433648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.620 [2024-12-06 15:45:39.433714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.620 [2024-12-06 15:45:39.433735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.621 [2024-12-06 15:45:39.433745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.621 [2024-12-06 15:45:39.433755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.621 [2024-12-06 15:45:39.433785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.621 qpair failed and we were unable to recover it.
00:28:33.621 [2024-12-06 15:45:39.443557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.621 [2024-12-06 15:45:39.443644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.621 [2024-12-06 15:45:39.443661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.621 [2024-12-06 15:45:39.443670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.621 [2024-12-06 15:45:39.443678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.621 [2024-12-06 15:45:39.443695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.621 qpair failed and we were unable to recover it.
00:28:33.621 [2024-12-06 15:45:39.453575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.621 [2024-12-06 15:45:39.453632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.621 [2024-12-06 15:45:39.453647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.621 [2024-12-06 15:45:39.453654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.621 [2024-12-06 15:45:39.453661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.621 [2024-12-06 15:45:39.453676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.621 qpair failed and we were unable to recover it.
00:28:33.621 [2024-12-06 15:45:39.463613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.621 [2024-12-06 15:45:39.463667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.621 [2024-12-06 15:45:39.463682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.621 [2024-12-06 15:45:39.463689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.621 [2024-12-06 15:45:39.463696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.621 [2024-12-06 15:45:39.463711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.621 qpair failed and we were unable to recover it.
00:28:33.621 [2024-12-06 15:45:39.473561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.621 [2024-12-06 15:45:39.473610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.621 [2024-12-06 15:45:39.473625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.621 [2024-12-06 15:45:39.473632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.621 [2024-12-06 15:45:39.473638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.621 [2024-12-06 15:45:39.473654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.621 qpair failed and we were unable to recover it.
00:28:33.621 [2024-12-06 15:45:39.483575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.621 [2024-12-06 15:45:39.483633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.621 [2024-12-06 15:45:39.483650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.621 [2024-12-06 15:45:39.483657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.621 [2024-12-06 15:45:39.483664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.621 [2024-12-06 15:45:39.483679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.621 qpair failed and we were unable to recover it.
00:28:33.621 [2024-12-06 15:45:39.493724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.621 [2024-12-06 15:45:39.493778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.621 [2024-12-06 15:45:39.493791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.621 [2024-12-06 15:45:39.493798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.621 [2024-12-06 15:45:39.493805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.621 [2024-12-06 15:45:39.493820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.621 qpair failed and we were unable to recover it.
00:28:33.621 [2024-12-06 15:45:39.503665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.621 [2024-12-06 15:45:39.503716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.621 [2024-12-06 15:45:39.503731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.621 [2024-12-06 15:45:39.503738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.621 [2024-12-06 15:45:39.503746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.621 [2024-12-06 15:45:39.503762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.621 qpair failed and we were unable to recover it.
00:28:33.621 [2024-12-06 15:45:39.513689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.621 [2024-12-06 15:45:39.513748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.621 [2024-12-06 15:45:39.513762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.621 [2024-12-06 15:45:39.513769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.621 [2024-12-06 15:45:39.513775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.621 [2024-12-06 15:45:39.513790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.621 qpair failed and we were unable to recover it.
00:28:33.621 [2024-12-06 15:45:39.523701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.621 [2024-12-06 15:45:39.523757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.621 [2024-12-06 15:45:39.523770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.621 [2024-12-06 15:45:39.523780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.621 [2024-12-06 15:45:39.523787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.621 [2024-12-06 15:45:39.523802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.621 qpair failed and we were unable to recover it.
00:28:33.621 [2024-12-06 15:45:39.533736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.621 [2024-12-06 15:45:39.533792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.621 [2024-12-06 15:45:39.533806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.621 [2024-12-06 15:45:39.533813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.621 [2024-12-06 15:45:39.533819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.621 [2024-12-06 15:45:39.533834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.621 qpair failed and we were unable to recover it.
00:28:33.621 [2024-12-06 15:45:39.543814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.621 [2024-12-06 15:45:39.543872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.621 [2024-12-06 15:45:39.543885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.621 [2024-12-06 15:45:39.543892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.621 [2024-12-06 15:45:39.543898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.621 [2024-12-06 15:45:39.543913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.621 qpair failed and we were unable to recover it.
00:28:33.621 [2024-12-06 15:45:39.553860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.621 [2024-12-06 15:45:39.553913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.621 [2024-12-06 15:45:39.553928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.621 [2024-12-06 15:45:39.553935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.621 [2024-12-06 15:45:39.553942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.621 [2024-12-06 15:45:39.553958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.621 qpair failed and we were unable to recover it.
00:28:33.621 [2024-12-06 15:45:39.563898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.622 [2024-12-06 15:45:39.563976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.622 [2024-12-06 15:45:39.563990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.622 [2024-12-06 15:45:39.563997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.622 [2024-12-06 15:45:39.564004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.622 [2024-12-06 15:45:39.564019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.622 qpair failed and we were unable to recover it.
00:28:33.880 [2024-12-06 15:45:39.573943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.880 [2024-12-06 15:45:39.574006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.880 [2024-12-06 15:45:39.574019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.880 [2024-12-06 15:45:39.574026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.880 [2024-12-06 15:45:39.574033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.880 [2024-12-06 15:45:39.574048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.880 qpair failed and we were unable to recover it.
00:28:33.880 [2024-12-06 15:45:39.583897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.880 [2024-12-06 15:45:39.583955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.880 [2024-12-06 15:45:39.583969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.880 [2024-12-06 15:45:39.583976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.880 [2024-12-06 15:45:39.583982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:33.880 [2024-12-06 15:45:39.583997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.880 qpair failed and we were unable to recover it. 
00:28:33.880 [2024-12-06 15:45:39.593893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.880 [2024-12-06 15:45:39.593947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.880 [2024-12-06 15:45:39.593962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.880 [2024-12-06 15:45:39.593970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.880 [2024-12-06 15:45:39.593976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:33.880 [2024-12-06 15:45:39.593992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.880 qpair failed and we were unable to recover it. 
00:28:33.880 [2024-12-06 15:45:39.604032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.880 [2024-12-06 15:45:39.604089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.880 [2024-12-06 15:45:39.604103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.880 [2024-12-06 15:45:39.604110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.880 [2024-12-06 15:45:39.604117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:33.880 [2024-12-06 15:45:39.604132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:33.880 qpair failed and we were unable to recover it. 
00:28:33.880 [2024-12-06 15:45:39.614045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.881 [2024-12-06 15:45:39.614105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.881 [2024-12-06 15:45:39.614120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.881 [2024-12-06 15:45:39.614127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.881 [2024-12-06 15:45:39.614133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.881 [2024-12-06 15:45:39.614148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.881 qpair failed and we were unable to recover it.
00:28:33.881 [2024-12-06 15:45:39.624065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.881 [2024-12-06 15:45:39.624125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.881 [2024-12-06 15:45:39.624139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.881 [2024-12-06 15:45:39.624146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.881 [2024-12-06 15:45:39.624153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.881 [2024-12-06 15:45:39.624168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.881 qpair failed and we were unable to recover it.
00:28:33.881 [2024-12-06 15:45:39.634152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.881 [2024-12-06 15:45:39.634232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.881 [2024-12-06 15:45:39.634245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.881 [2024-12-06 15:45:39.634253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.881 [2024-12-06 15:45:39.634259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.881 [2024-12-06 15:45:39.634274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.881 qpair failed and we were unable to recover it.
00:28:33.881 [2024-12-06 15:45:39.644117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.881 [2024-12-06 15:45:39.644180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.881 [2024-12-06 15:45:39.644194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.881 [2024-12-06 15:45:39.644201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.881 [2024-12-06 15:45:39.644207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.881 [2024-12-06 15:45:39.644222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.881 qpair failed and we were unable to recover it.
00:28:33.881 [2024-12-06 15:45:39.654152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.881 [2024-12-06 15:45:39.654245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.881 [2024-12-06 15:45:39.654259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.881 [2024-12-06 15:45:39.654270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.881 [2024-12-06 15:45:39.654276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.881 [2024-12-06 15:45:39.654291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.881 qpair failed and we were unable to recover it.
00:28:33.881 [2024-12-06 15:45:39.664101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.881 [2024-12-06 15:45:39.664173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.881 [2024-12-06 15:45:39.664188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.881 [2024-12-06 15:45:39.664195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.881 [2024-12-06 15:45:39.664202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.881 [2024-12-06 15:45:39.664218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.881 qpair failed and we were unable to recover it.
00:28:33.881 [2024-12-06 15:45:39.674183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.881 [2024-12-06 15:45:39.674235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.881 [2024-12-06 15:45:39.674249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.881 [2024-12-06 15:45:39.674256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.881 [2024-12-06 15:45:39.674263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.881 [2024-12-06 15:45:39.674278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.881 qpair failed and we were unable to recover it.
00:28:33.881 [2024-12-06 15:45:39.684223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.881 [2024-12-06 15:45:39.684278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.881 [2024-12-06 15:45:39.684292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.881 [2024-12-06 15:45:39.684299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.881 [2024-12-06 15:45:39.684307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.881 [2024-12-06 15:45:39.684323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.881 qpair failed and we were unable to recover it.
00:28:33.881 [2024-12-06 15:45:39.694274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.881 [2024-12-06 15:45:39.694326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.881 [2024-12-06 15:45:39.694340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.881 [2024-12-06 15:45:39.694347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.881 [2024-12-06 15:45:39.694354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.881 [2024-12-06 15:45:39.694376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.881 qpair failed and we were unable to recover it.
00:28:33.881 [2024-12-06 15:45:39.704294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.881 [2024-12-06 15:45:39.704348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.881 [2024-12-06 15:45:39.704362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.881 [2024-12-06 15:45:39.704373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.881 [2024-12-06 15:45:39.704380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.881 [2024-12-06 15:45:39.704395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.881 qpair failed and we were unable to recover it.
00:28:33.881 [2024-12-06 15:45:39.714311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.881 [2024-12-06 15:45:39.714362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.881 [2024-12-06 15:45:39.714381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.881 [2024-12-06 15:45:39.714388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.881 [2024-12-06 15:45:39.714394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.881 [2024-12-06 15:45:39.714409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.881 qpair failed and we were unable to recover it.
00:28:33.881 [2024-12-06 15:45:39.724274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.881 [2024-12-06 15:45:39.724332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.881 [2024-12-06 15:45:39.724346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.881 [2024-12-06 15:45:39.724353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.881 [2024-12-06 15:45:39.724360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.881 [2024-12-06 15:45:39.724379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.881 qpair failed and we were unable to recover it.
00:28:33.881 [2024-12-06 15:45:39.734411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.881 [2024-12-06 15:45:39.734464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.881 [2024-12-06 15:45:39.734477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.881 [2024-12-06 15:45:39.734485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.881 [2024-12-06 15:45:39.734491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.881 [2024-12-06 15:45:39.734507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.881 qpair failed and we were unable to recover it.
00:28:33.881 [2024-12-06 15:45:39.744404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.882 [2024-12-06 15:45:39.744459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.882 [2024-12-06 15:45:39.744473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.882 [2024-12-06 15:45:39.744481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.882 [2024-12-06 15:45:39.744487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.882 [2024-12-06 15:45:39.744502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.882 qpair failed and we were unable to recover it.
00:28:33.882 [2024-12-06 15:45:39.754458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.882 [2024-12-06 15:45:39.754512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.882 [2024-12-06 15:45:39.754525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.882 [2024-12-06 15:45:39.754533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.882 [2024-12-06 15:45:39.754540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.882 [2024-12-06 15:45:39.754555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.882 qpair failed and we were unable to recover it.
00:28:33.882 [2024-12-06 15:45:39.764467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.882 [2024-12-06 15:45:39.764526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.882 [2024-12-06 15:45:39.764539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.882 [2024-12-06 15:45:39.764548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.882 [2024-12-06 15:45:39.764554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.882 [2024-12-06 15:45:39.764569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.882 qpair failed and we were unable to recover it.
00:28:33.882 [2024-12-06 15:45:39.774529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.882 [2024-12-06 15:45:39.774618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.882 [2024-12-06 15:45:39.774633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.882 [2024-12-06 15:45:39.774640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.882 [2024-12-06 15:45:39.774646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.882 [2024-12-06 15:45:39.774660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.882 qpair failed and we were unable to recover it.
00:28:33.882 [2024-12-06 15:45:39.784523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.882 [2024-12-06 15:45:39.784578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.882 [2024-12-06 15:45:39.784594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.882 [2024-12-06 15:45:39.784602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.882 [2024-12-06 15:45:39.784608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.882 [2024-12-06 15:45:39.784623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.882 qpair failed and we were unable to recover it.
00:28:33.882 [2024-12-06 15:45:39.794554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.882 [2024-12-06 15:45:39.794606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.882 [2024-12-06 15:45:39.794620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.882 [2024-12-06 15:45:39.794627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.882 [2024-12-06 15:45:39.794634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.882 [2024-12-06 15:45:39.794649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.882 qpair failed and we were unable to recover it.
00:28:33.882 [2024-12-06 15:45:39.804618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.882 [2024-12-06 15:45:39.804676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.882 [2024-12-06 15:45:39.804692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.882 [2024-12-06 15:45:39.804699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.882 [2024-12-06 15:45:39.804706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.882 [2024-12-06 15:45:39.804721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.882 qpair failed and we were unable to recover it.
00:28:33.882 [2024-12-06 15:45:39.814622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.882 [2024-12-06 15:45:39.814697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.882 [2024-12-06 15:45:39.814711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.882 [2024-12-06 15:45:39.814718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.882 [2024-12-06 15:45:39.814724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.882 [2024-12-06 15:45:39.814739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.882 qpair failed and we were unable to recover it.
00:28:33.882 [2024-12-06 15:45:39.824636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.882 [2024-12-06 15:45:39.824689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.882 [2024-12-06 15:45:39.824702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.882 [2024-12-06 15:45:39.824709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.882 [2024-12-06 15:45:39.824719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.882 [2024-12-06 15:45:39.824734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.882 qpair failed and we were unable to recover it.
00:28:33.882 [2024-12-06 15:45:39.834659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.882 [2024-12-06 15:45:39.834715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.882 [2024-12-06 15:45:39.834728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.882 [2024-12-06 15:45:39.834735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.882 [2024-12-06 15:45:39.834742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.882 [2024-12-06 15:45:39.834756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.882 qpair failed and we were unable to recover it.
00:28:33.882 [2024-12-06 15:45:39.844698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.882 [2024-12-06 15:45:39.844758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.882 [2024-12-06 15:45:39.844771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.882 [2024-12-06 15:45:39.844779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.882 [2024-12-06 15:45:39.844785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.882 [2024-12-06 15:45:39.844800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.882 qpair failed and we were unable to recover it.
00:28:33.882 [2024-12-06 15:45:39.854711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.882 [2024-12-06 15:45:39.854768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.882 [2024-12-06 15:45:39.854782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.882 [2024-12-06 15:45:39.854789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.882 [2024-12-06 15:45:39.854795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.882 [2024-12-06 15:45:39.854810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.882 qpair failed and we were unable to recover it.
00:28:33.882 [2024-12-06 15:45:39.864738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.882 [2024-12-06 15:45:39.864793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.882 [2024-12-06 15:45:39.864806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.882 [2024-12-06 15:45:39.864813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.882 [2024-12-06 15:45:39.864819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.882 [2024-12-06 15:45:39.864834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.882 qpair failed and we were unable to recover it.
00:28:33.882 [2024-12-06 15:45:39.874786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.883 [2024-12-06 15:45:39.874842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.883 [2024-12-06 15:45:39.874856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.883 [2024-12-06 15:45:39.874862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.883 [2024-12-06 15:45:39.874869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:33.883 [2024-12-06 15:45:39.874883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:33.883 qpair failed and we were unable to recover it.
00:28:34.141 [2024-12-06 15:45:39.884840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.141 [2024-12-06 15:45:39.884912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.141 [2024-12-06 15:45:39.884925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.141 [2024-12-06 15:45:39.884932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.141 [2024-12-06 15:45:39.884938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.141 [2024-12-06 15:45:39.884952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.141 qpair failed and we were unable to recover it.
00:28:34.141 [2024-12-06 15:45:39.894850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.141 [2024-12-06 15:45:39.894923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.141 [2024-12-06 15:45:39.894937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.141 [2024-12-06 15:45:39.894944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.141 [2024-12-06 15:45:39.894950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.141 [2024-12-06 15:45:39.894965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.141 qpair failed and we were unable to recover it.
00:28:34.141 [2024-12-06 15:45:39.904853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.141 [2024-12-06 15:45:39.904905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.141 [2024-12-06 15:45:39.904918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.141 [2024-12-06 15:45:39.904925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.141 [2024-12-06 15:45:39.904931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.141 [2024-12-06 15:45:39.904946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.141 qpair failed and we were unable to recover it.
00:28:34.141 [2024-12-06 15:45:39.914867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.141 [2024-12-06 15:45:39.914924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.141 [2024-12-06 15:45:39.914941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.141 [2024-12-06 15:45:39.914949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.141 [2024-12-06 15:45:39.914955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.141 [2024-12-06 15:45:39.914970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.141 qpair failed and we were unable to recover it. 
00:28:34.141 [2024-12-06 15:45:39.924905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.141 [2024-12-06 15:45:39.924963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.141 [2024-12-06 15:45:39.924977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.141 [2024-12-06 15:45:39.924985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.141 [2024-12-06 15:45:39.924991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.141 [2024-12-06 15:45:39.925005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.141 qpair failed and we were unable to recover it. 
00:28:34.141 [2024-12-06 15:45:39.934934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.141 [2024-12-06 15:45:39.934993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.141 [2024-12-06 15:45:39.935007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.141 [2024-12-06 15:45:39.935014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.141 [2024-12-06 15:45:39.935021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.141 [2024-12-06 15:45:39.935035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.141 qpair failed and we were unable to recover it. 
00:28:34.141 [2024-12-06 15:45:39.945004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.141 [2024-12-06 15:45:39.945102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.141 [2024-12-06 15:45:39.945116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.141 [2024-12-06 15:45:39.945124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.141 [2024-12-06 15:45:39.945130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.142 [2024-12-06 15:45:39.945145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.142 qpair failed and we were unable to recover it. 
00:28:34.142 [2024-12-06 15:45:39.954983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.142 [2024-12-06 15:45:39.955036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.142 [2024-12-06 15:45:39.955049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.142 [2024-12-06 15:45:39.955057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.142 [2024-12-06 15:45:39.955066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.142 [2024-12-06 15:45:39.955081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.142 qpair failed and we were unable to recover it. 
00:28:34.142 [2024-12-06 15:45:39.965013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.142 [2024-12-06 15:45:39.965087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.142 [2024-12-06 15:45:39.965101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.142 [2024-12-06 15:45:39.965108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.142 [2024-12-06 15:45:39.965115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.142 [2024-12-06 15:45:39.965129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.142 qpair failed and we were unable to recover it. 
00:28:34.142 [2024-12-06 15:45:39.975075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.142 [2024-12-06 15:45:39.975139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.142 [2024-12-06 15:45:39.975152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.142 [2024-12-06 15:45:39.975159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.142 [2024-12-06 15:45:39.975166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.142 [2024-12-06 15:45:39.975180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.142 qpair failed and we were unable to recover it. 
00:28:34.142 [2024-12-06 15:45:39.985133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.142 [2024-12-06 15:45:39.985238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.142 [2024-12-06 15:45:39.985254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.142 [2024-12-06 15:45:39.985261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.142 [2024-12-06 15:45:39.985267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.142 [2024-12-06 15:45:39.985282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.142 qpair failed and we were unable to recover it. 
00:28:34.142 [2024-12-06 15:45:39.995103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.142 [2024-12-06 15:45:39.995158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.142 [2024-12-06 15:45:39.995172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.142 [2024-12-06 15:45:39.995179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.142 [2024-12-06 15:45:39.995186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.142 [2024-12-06 15:45:39.995200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.142 qpair failed and we were unable to recover it. 
00:28:34.142 [2024-12-06 15:45:40.005183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.142 [2024-12-06 15:45:40.005240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.142 [2024-12-06 15:45:40.005256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.142 [2024-12-06 15:45:40.005264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.142 [2024-12-06 15:45:40.005270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.142 [2024-12-06 15:45:40.005286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.142 qpair failed and we were unable to recover it. 
00:28:34.142 [2024-12-06 15:45:40.015195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.142 [2024-12-06 15:45:40.015263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.142 [2024-12-06 15:45:40.015278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.142 [2024-12-06 15:45:40.015286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.142 [2024-12-06 15:45:40.015294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.142 [2024-12-06 15:45:40.015311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.142 qpair failed and we were unable to recover it. 
00:28:34.142 [2024-12-06 15:45:40.025184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.142 [2024-12-06 15:45:40.025238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.142 [2024-12-06 15:45:40.025251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.142 [2024-12-06 15:45:40.025259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.142 [2024-12-06 15:45:40.025265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.142 [2024-12-06 15:45:40.025280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.142 qpair failed and we were unable to recover it. 
00:28:34.142 [2024-12-06 15:45:40.035217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.142 [2024-12-06 15:45:40.035293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.142 [2024-12-06 15:45:40.035307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.142 [2024-12-06 15:45:40.035315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.142 [2024-12-06 15:45:40.035321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.142 [2024-12-06 15:45:40.035337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.142 qpair failed and we were unable to recover it. 
00:28:34.142 [2024-12-06 15:45:40.045313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.142 [2024-12-06 15:45:40.045375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.142 [2024-12-06 15:45:40.045395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.142 [2024-12-06 15:45:40.045404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.142 [2024-12-06 15:45:40.045411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.142 [2024-12-06 15:45:40.045428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.142 qpair failed and we were unable to recover it. 
00:28:34.142 [2024-12-06 15:45:40.055355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.142 [2024-12-06 15:45:40.055419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.142 [2024-12-06 15:45:40.055434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.142 [2024-12-06 15:45:40.055441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.142 [2024-12-06 15:45:40.055448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.142 [2024-12-06 15:45:40.055463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.142 qpair failed and we were unable to recover it. 
00:28:34.142 [2024-12-06 15:45:40.065371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.142 [2024-12-06 15:45:40.065429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.142 [2024-12-06 15:45:40.065443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.142 [2024-12-06 15:45:40.065450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.142 [2024-12-06 15:45:40.065457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.142 [2024-12-06 15:45:40.065472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.142 qpair failed and we were unable to recover it. 
00:28:34.142 [2024-12-06 15:45:40.075346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.142 [2024-12-06 15:45:40.075414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.142 [2024-12-06 15:45:40.075428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.142 [2024-12-06 15:45:40.075436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.142 [2024-12-06 15:45:40.075442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.143 [2024-12-06 15:45:40.075458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.143 qpair failed and we were unable to recover it. 
00:28:34.143 [2024-12-06 15:45:40.085384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.143 [2024-12-06 15:45:40.085443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.143 [2024-12-06 15:45:40.085457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.143 [2024-12-06 15:45:40.085467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.143 [2024-12-06 15:45:40.085473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.143 [2024-12-06 15:45:40.085489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.143 qpair failed and we were unable to recover it. 
00:28:34.143 [2024-12-06 15:45:40.095438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.143 [2024-12-06 15:45:40.095496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.143 [2024-12-06 15:45:40.095510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.143 [2024-12-06 15:45:40.095518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.143 [2024-12-06 15:45:40.095524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.143 [2024-12-06 15:45:40.095540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.143 qpair failed and we were unable to recover it. 
00:28:34.143 [2024-12-06 15:45:40.105422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.143 [2024-12-06 15:45:40.105477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.143 [2024-12-06 15:45:40.105491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.143 [2024-12-06 15:45:40.105498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.143 [2024-12-06 15:45:40.105505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.143 [2024-12-06 15:45:40.105521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.143 qpair failed and we were unable to recover it. 
00:28:34.143 [2024-12-06 15:45:40.115489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.143 [2024-12-06 15:45:40.115543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.143 [2024-12-06 15:45:40.115557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.143 [2024-12-06 15:45:40.115565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.143 [2024-12-06 15:45:40.115571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.143 [2024-12-06 15:45:40.115586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.143 qpair failed and we were unable to recover it. 
00:28:34.143 [2024-12-06 15:45:40.125473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.143 [2024-12-06 15:45:40.125528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.143 [2024-12-06 15:45:40.125542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.143 [2024-12-06 15:45:40.125549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.143 [2024-12-06 15:45:40.125556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.143 [2024-12-06 15:45:40.125571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.143 qpair failed and we were unable to recover it. 
00:28:34.143 [2024-12-06 15:45:40.135577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.143 [2024-12-06 15:45:40.135640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.143 [2024-12-06 15:45:40.135653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.143 [2024-12-06 15:45:40.135660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.143 [2024-12-06 15:45:40.135667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.143 [2024-12-06 15:45:40.135681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.143 qpair failed and we were unable to recover it. 
00:28:34.401 [2024-12-06 15:45:40.145553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.401 [2024-12-06 15:45:40.145614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.401 [2024-12-06 15:45:40.145628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.401 [2024-12-06 15:45:40.145635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.401 [2024-12-06 15:45:40.145641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.401 [2024-12-06 15:45:40.145656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.401 qpair failed and we were unable to recover it. 
00:28:34.401 [2024-12-06 15:45:40.155573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.401 [2024-12-06 15:45:40.155666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.401 [2024-12-06 15:45:40.155679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.401 [2024-12-06 15:45:40.155686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.402 [2024-12-06 15:45:40.155692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.402 [2024-12-06 15:45:40.155707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.402 qpair failed and we were unable to recover it. 
00:28:34.402 [2024-12-06 15:45:40.165615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.402 [2024-12-06 15:45:40.165688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.402 [2024-12-06 15:45:40.165701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.402 [2024-12-06 15:45:40.165709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.402 [2024-12-06 15:45:40.165715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.402 [2024-12-06 15:45:40.165732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.402 qpair failed and we were unable to recover it. 
00:28:34.402 [2024-12-06 15:45:40.175562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.402 [2024-12-06 15:45:40.175623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.402 [2024-12-06 15:45:40.175637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.402 [2024-12-06 15:45:40.175644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.402 [2024-12-06 15:45:40.175650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.402 [2024-12-06 15:45:40.175664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.402 qpair failed and we were unable to recover it. 
00:28:34.402 [2024-12-06 15:45:40.185664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.402 [2024-12-06 15:45:40.185720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.402 [2024-12-06 15:45:40.185733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.402 [2024-12-06 15:45:40.185741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.402 [2024-12-06 15:45:40.185747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.402 [2024-12-06 15:45:40.185762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.402 qpair failed and we were unable to recover it.
00:28:34.402 [2024-12-06 15:45:40.195689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.402 [2024-12-06 15:45:40.195742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.402 [2024-12-06 15:45:40.195755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.402 [2024-12-06 15:45:40.195762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.402 [2024-12-06 15:45:40.195769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.402 [2024-12-06 15:45:40.195783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.402 qpair failed and we were unable to recover it.
00:28:34.402 [2024-12-06 15:45:40.205719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.402 [2024-12-06 15:45:40.205780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.402 [2024-12-06 15:45:40.205793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.402 [2024-12-06 15:45:40.205800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.402 [2024-12-06 15:45:40.205807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.402 [2024-12-06 15:45:40.205822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.402 qpair failed and we were unable to recover it.
00:28:34.402 [2024-12-06 15:45:40.215745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.402 [2024-12-06 15:45:40.215798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.402 [2024-12-06 15:45:40.215813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.402 [2024-12-06 15:45:40.215823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.402 [2024-12-06 15:45:40.215830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.402 [2024-12-06 15:45:40.215845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.402 qpair failed and we were unable to recover it.
00:28:34.402 [2024-12-06 15:45:40.225836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.402 [2024-12-06 15:45:40.225896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.402 [2024-12-06 15:45:40.225911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.402 [2024-12-06 15:45:40.225918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.402 [2024-12-06 15:45:40.225924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.402 [2024-12-06 15:45:40.225939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.402 qpair failed and we were unable to recover it.
00:28:34.402 [2024-12-06 15:45:40.235778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.402 [2024-12-06 15:45:40.235877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.402 [2024-12-06 15:45:40.235891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.402 [2024-12-06 15:45:40.235898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.402 [2024-12-06 15:45:40.235904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.402 [2024-12-06 15:45:40.235918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.402 qpair failed and we were unable to recover it.
00:28:34.402 [2024-12-06 15:45:40.245916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.402 [2024-12-06 15:45:40.245975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.402 [2024-12-06 15:45:40.245988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.402 [2024-12-06 15:45:40.245995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.402 [2024-12-06 15:45:40.246001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.402 [2024-12-06 15:45:40.246016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.402 qpair failed and we were unable to recover it.
00:28:34.402 [2024-12-06 15:45:40.255922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.402 [2024-12-06 15:45:40.255983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.402 [2024-12-06 15:45:40.255997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.402 [2024-12-06 15:45:40.256004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.402 [2024-12-06 15:45:40.256010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.402 [2024-12-06 15:45:40.256028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.402 qpair failed and we were unable to recover it.
00:28:34.402 [2024-12-06 15:45:40.265894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.402 [2024-12-06 15:45:40.265980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.402 [2024-12-06 15:45:40.265994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.402 [2024-12-06 15:45:40.266001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.402 [2024-12-06 15:45:40.266007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.402 [2024-12-06 15:45:40.266022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.402 qpair failed and we were unable to recover it.
00:28:34.402 [2024-12-06 15:45:40.275904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.402 [2024-12-06 15:45:40.275965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.402 [2024-12-06 15:45:40.275979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.402 [2024-12-06 15:45:40.275986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.402 [2024-12-06 15:45:40.275993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.402 [2024-12-06 15:45:40.276008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.402 qpair failed and we were unable to recover it.
00:28:34.402 [2024-12-06 15:45:40.285954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.402 [2024-12-06 15:45:40.286013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.402 [2024-12-06 15:45:40.286027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.403 [2024-12-06 15:45:40.286034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.403 [2024-12-06 15:45:40.286040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.403 [2024-12-06 15:45:40.286054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.403 qpair failed and we were unable to recover it.
00:28:34.403 [2024-12-06 15:45:40.295974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.403 [2024-12-06 15:45:40.296036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.403 [2024-12-06 15:45:40.296071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.403 [2024-12-06 15:45:40.296079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.403 [2024-12-06 15:45:40.296085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.403 [2024-12-06 15:45:40.296109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.403 qpair failed and we were unable to recover it.
00:28:34.403 [2024-12-06 15:45:40.305985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.403 [2024-12-06 15:45:40.306041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.403 [2024-12-06 15:45:40.306057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.403 [2024-12-06 15:45:40.306066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.403 [2024-12-06 15:45:40.306073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.403 [2024-12-06 15:45:40.306091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.403 qpair failed and we were unable to recover it.
00:28:34.403 [2024-12-06 15:45:40.315961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.403 [2024-12-06 15:45:40.316010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.403 [2024-12-06 15:45:40.316024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.403 [2024-12-06 15:45:40.316031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.403 [2024-12-06 15:45:40.316037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.403 [2024-12-06 15:45:40.316053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.403 qpair failed and we were unable to recover it.
00:28:34.403 [2024-12-06 15:45:40.326054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.403 [2024-12-06 15:45:40.326111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.403 [2024-12-06 15:45:40.326125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.403 [2024-12-06 15:45:40.326132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.403 [2024-12-06 15:45:40.326138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.403 [2024-12-06 15:45:40.326154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.403 qpair failed and we were unable to recover it.
00:28:34.403 [2024-12-06 15:45:40.336028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.403 [2024-12-06 15:45:40.336116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.403 [2024-12-06 15:45:40.336129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.403 [2024-12-06 15:45:40.336137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.403 [2024-12-06 15:45:40.336143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.403 [2024-12-06 15:45:40.336158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.403 qpair failed and we were unable to recover it.
00:28:34.403 [2024-12-06 15:45:40.346096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.403 [2024-12-06 15:45:40.346147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.403 [2024-12-06 15:45:40.346164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.403 [2024-12-06 15:45:40.346171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.403 [2024-12-06 15:45:40.346177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.403 [2024-12-06 15:45:40.346192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.403 qpair failed and we were unable to recover it.
00:28:34.403 [2024-12-06 15:45:40.356126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.403 [2024-12-06 15:45:40.356183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.403 [2024-12-06 15:45:40.356197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.403 [2024-12-06 15:45:40.356203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.403 [2024-12-06 15:45:40.356210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.403 [2024-12-06 15:45:40.356226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.403 qpair failed and we were unable to recover it.
00:28:34.403 [2024-12-06 15:45:40.366162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.403 [2024-12-06 15:45:40.366239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.403 [2024-12-06 15:45:40.366252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.403 [2024-12-06 15:45:40.366259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.403 [2024-12-06 15:45:40.366266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.403 [2024-12-06 15:45:40.366280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.403 qpair failed and we were unable to recover it.
00:28:34.403 [2024-12-06 15:45:40.376216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.403 [2024-12-06 15:45:40.376298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.403 [2024-12-06 15:45:40.376312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.403 [2024-12-06 15:45:40.376320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.403 [2024-12-06 15:45:40.376326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.403 [2024-12-06 15:45:40.376341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.403 qpair failed and we were unable to recover it.
00:28:34.403 [2024-12-06 15:45:40.386209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.403 [2024-12-06 15:45:40.386267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.403 [2024-12-06 15:45:40.386281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.403 [2024-12-06 15:45:40.386288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.403 [2024-12-06 15:45:40.386298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.403 [2024-12-06 15:45:40.386314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.403 qpair failed and we were unable to recover it.
00:28:34.403 [2024-12-06 15:45:40.396302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.403 [2024-12-06 15:45:40.396358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.403 [2024-12-06 15:45:40.396377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.403 [2024-12-06 15:45:40.396384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.403 [2024-12-06 15:45:40.396390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.403 [2024-12-06 15:45:40.396405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.403 qpair failed and we were unable to recover it.
00:28:34.662 [2024-12-06 15:45:40.406318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.662 [2024-12-06 15:45:40.406381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.662 [2024-12-06 15:45:40.406395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.662 [2024-12-06 15:45:40.406402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.662 [2024-12-06 15:45:40.406408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.662 [2024-12-06 15:45:40.406423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.662 qpair failed and we were unable to recover it.
00:28:34.662 [2024-12-06 15:45:40.416322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.662 [2024-12-06 15:45:40.416403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.662 [2024-12-06 15:45:40.416418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.662 [2024-12-06 15:45:40.416425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.662 [2024-12-06 15:45:40.416432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.662 [2024-12-06 15:45:40.416448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.662 qpair failed and we were unable to recover it.
00:28:34.662 [2024-12-06 15:45:40.426330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.662 [2024-12-06 15:45:40.426388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.662 [2024-12-06 15:45:40.426402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.662 [2024-12-06 15:45:40.426409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.662 [2024-12-06 15:45:40.426416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.662 [2024-12-06 15:45:40.426430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.662 qpair failed and we were unable to recover it.
00:28:34.662 [2024-12-06 15:45:40.436358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.662 [2024-12-06 15:45:40.436412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.662 [2024-12-06 15:45:40.436427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.662 [2024-12-06 15:45:40.436435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.662 [2024-12-06 15:45:40.436441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.662 [2024-12-06 15:45:40.436455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.662 qpair failed and we were unable to recover it.
00:28:34.662 [2024-12-06 15:45:40.446404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.662 [2024-12-06 15:45:40.446480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.662 [2024-12-06 15:45:40.446494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.662 [2024-12-06 15:45:40.446501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.662 [2024-12-06 15:45:40.446507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.662 [2024-12-06 15:45:40.446523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.662 qpair failed and we were unable to recover it.
00:28:34.662 [2024-12-06 15:45:40.456452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.662 [2024-12-06 15:45:40.456506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.662 [2024-12-06 15:45:40.456519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.662 [2024-12-06 15:45:40.456526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.662 [2024-12-06 15:45:40.456532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.662 [2024-12-06 15:45:40.456547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.662 qpair failed and we were unable to recover it.
00:28:34.662 [2024-12-06 15:45:40.466452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.662 [2024-12-06 15:45:40.466528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.662 [2024-12-06 15:45:40.466541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.662 [2024-12-06 15:45:40.466548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.662 [2024-12-06 15:45:40.466554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.663 [2024-12-06 15:45:40.466569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.663 qpair failed and we were unable to recover it.
00:28:34.663 [2024-12-06 15:45:40.476507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.663 [2024-12-06 15:45:40.476563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.663 [2024-12-06 15:45:40.476580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.663 [2024-12-06 15:45:40.476587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.663 [2024-12-06 15:45:40.476593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.663 [2024-12-06 15:45:40.476608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.663 qpair failed and we were unable to recover it.
00:28:34.663 [2024-12-06 15:45:40.486430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.663 [2024-12-06 15:45:40.486486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.663 [2024-12-06 15:45:40.486500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.663 [2024-12-06 15:45:40.486507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.663 [2024-12-06 15:45:40.486513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.663 [2024-12-06 15:45:40.486528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.663 qpair failed and we were unable to recover it.
00:28:34.663 [2024-12-06 15:45:40.496527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.663 [2024-12-06 15:45:40.496583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.663 [2024-12-06 15:45:40.496597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.663 [2024-12-06 15:45:40.496604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.663 [2024-12-06 15:45:40.496611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.663 [2024-12-06 15:45:40.496625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.663 qpair failed and we were unable to recover it.
00:28:34.663 [2024-12-06 15:45:40.506547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.663 [2024-12-06 15:45:40.506604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.663 [2024-12-06 15:45:40.506617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.663 [2024-12-06 15:45:40.506624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.663 [2024-12-06 15:45:40.506631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.663 [2024-12-06 15:45:40.506646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.663 qpair failed and we were unable to recover it.
00:28:34.663 [2024-12-06 15:45:40.516571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.663 [2024-12-06 15:45:40.516670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.663 [2024-12-06 15:45:40.516684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.663 [2024-12-06 15:45:40.516691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.663 [2024-12-06 15:45:40.516700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.663 [2024-12-06 15:45:40.516715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.663 qpair failed and we were unable to recover it.
00:28:34.663 [2024-12-06 15:45:40.526619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.663 [2024-12-06 15:45:40.526673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.663 [2024-12-06 15:45:40.526687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.663 [2024-12-06 15:45:40.526694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.663 [2024-12-06 15:45:40.526701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:34.663 [2024-12-06 15:45:40.526715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:34.663 qpair failed and we were unable to recover it.
00:28:34.663 [2024-12-06 15:45:40.536644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.663 [2024-12-06 15:45:40.536696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.663 [2024-12-06 15:45:40.536711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.663 [2024-12-06 15:45:40.536718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.663 [2024-12-06 15:45:40.536724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.663 [2024-12-06 15:45:40.536739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.663 qpair failed and we were unable to recover it. 
00:28:34.663 [2024-12-06 15:45:40.546678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.663 [2024-12-06 15:45:40.546733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.663 [2024-12-06 15:45:40.546747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.663 [2024-12-06 15:45:40.546754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.663 [2024-12-06 15:45:40.546761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.663 [2024-12-06 15:45:40.546774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.663 qpair failed and we were unable to recover it. 
00:28:34.663 [2024-12-06 15:45:40.556735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.663 [2024-12-06 15:45:40.556788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.663 [2024-12-06 15:45:40.556802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.663 [2024-12-06 15:45:40.556809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.663 [2024-12-06 15:45:40.556816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.663 [2024-12-06 15:45:40.556830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.663 qpair failed and we were unable to recover it. 
00:28:34.663 [2024-12-06 15:45:40.566746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.663 [2024-12-06 15:45:40.566810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.663 [2024-12-06 15:45:40.566824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.663 [2024-12-06 15:45:40.566831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.663 [2024-12-06 15:45:40.566837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.663 [2024-12-06 15:45:40.566852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.663 qpair failed and we were unable to recover it. 
00:28:34.663 [2024-12-06 15:45:40.576788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.663 [2024-12-06 15:45:40.576848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.663 [2024-12-06 15:45:40.576862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.663 [2024-12-06 15:45:40.576869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.663 [2024-12-06 15:45:40.576875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.663 [2024-12-06 15:45:40.576890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.663 qpair failed and we were unable to recover it. 
00:28:34.663 [2024-12-06 15:45:40.586786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.663 [2024-12-06 15:45:40.586839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.663 [2024-12-06 15:45:40.586852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.663 [2024-12-06 15:45:40.586859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.663 [2024-12-06 15:45:40.586866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.663 [2024-12-06 15:45:40.586880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.663 qpair failed and we were unable to recover it. 
00:28:34.663 [2024-12-06 15:45:40.596797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.663 [2024-12-06 15:45:40.596851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.663 [2024-12-06 15:45:40.596865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.663 [2024-12-06 15:45:40.596872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.663 [2024-12-06 15:45:40.596879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.664 [2024-12-06 15:45:40.596894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.664 qpair failed and we were unable to recover it. 
00:28:34.664 [2024-12-06 15:45:40.606844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.664 [2024-12-06 15:45:40.606903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.664 [2024-12-06 15:45:40.606921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.664 [2024-12-06 15:45:40.606928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.664 [2024-12-06 15:45:40.606934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.664 [2024-12-06 15:45:40.606949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.664 qpair failed and we were unable to recover it. 
00:28:34.664 [2024-12-06 15:45:40.616882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.664 [2024-12-06 15:45:40.616937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.664 [2024-12-06 15:45:40.616952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.664 [2024-12-06 15:45:40.616959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.664 [2024-12-06 15:45:40.616965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.664 [2024-12-06 15:45:40.616980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.664 qpair failed and we were unable to recover it. 
00:28:34.664 [2024-12-06 15:45:40.626899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.664 [2024-12-06 15:45:40.626955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.664 [2024-12-06 15:45:40.626971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.664 [2024-12-06 15:45:40.626979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.664 [2024-12-06 15:45:40.626986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.664 [2024-12-06 15:45:40.627001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.664 qpair failed and we were unable to recover it. 
00:28:34.664 [2024-12-06 15:45:40.636943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.664 [2024-12-06 15:45:40.636999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.664 [2024-12-06 15:45:40.637013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.664 [2024-12-06 15:45:40.637022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.664 [2024-12-06 15:45:40.637028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.664 [2024-12-06 15:45:40.637043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.664 qpair failed and we were unable to recover it. 
00:28:34.664 [2024-12-06 15:45:40.646990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.664 [2024-12-06 15:45:40.647050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.664 [2024-12-06 15:45:40.647063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.664 [2024-12-06 15:45:40.647074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.664 [2024-12-06 15:45:40.647080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.664 [2024-12-06 15:45:40.647095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.664 qpair failed and we were unable to recover it. 
00:28:34.923 [2024-12-06 15:45:40.657021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.923 [2024-12-06 15:45:40.657091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.923 [2024-12-06 15:45:40.657110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.923 [2024-12-06 15:45:40.657117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.923 [2024-12-06 15:45:40.657123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.923 [2024-12-06 15:45:40.657138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.923 qpair failed and we were unable to recover it. 
00:28:34.923 [2024-12-06 15:45:40.667065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.923 [2024-12-06 15:45:40.667131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.923 [2024-12-06 15:45:40.667145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.923 [2024-12-06 15:45:40.667152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.923 [2024-12-06 15:45:40.667158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.923 [2024-12-06 15:45:40.667174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.923 qpair failed and we were unable to recover it. 
00:28:34.923 [2024-12-06 15:45:40.677038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.923 [2024-12-06 15:45:40.677095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.923 [2024-12-06 15:45:40.677109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.923 [2024-12-06 15:45:40.677116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.923 [2024-12-06 15:45:40.677122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.923 [2024-12-06 15:45:40.677136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.923 qpair failed and we were unable to recover it. 
00:28:34.923 [2024-12-06 15:45:40.687142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.923 [2024-12-06 15:45:40.687199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.923 [2024-12-06 15:45:40.687213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.923 [2024-12-06 15:45:40.687220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.923 [2024-12-06 15:45:40.687226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.924 [2024-12-06 15:45:40.687243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.924 qpair failed and we were unable to recover it. 
00:28:34.924 [2024-12-06 15:45:40.697131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.924 [2024-12-06 15:45:40.697190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.924 [2024-12-06 15:45:40.697204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.924 [2024-12-06 15:45:40.697211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.924 [2024-12-06 15:45:40.697217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.924 [2024-12-06 15:45:40.697232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.924 qpair failed and we were unable to recover it. 
00:28:34.924 [2024-12-06 15:45:40.707079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.924 [2024-12-06 15:45:40.707131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.924 [2024-12-06 15:45:40.707145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.924 [2024-12-06 15:45:40.707152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.924 [2024-12-06 15:45:40.707159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.924 [2024-12-06 15:45:40.707173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.924 qpair failed and we were unable to recover it. 
00:28:34.924 [2024-12-06 15:45:40.717166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.924 [2024-12-06 15:45:40.717221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.924 [2024-12-06 15:45:40.717235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.924 [2024-12-06 15:45:40.717242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.924 [2024-12-06 15:45:40.717248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.924 [2024-12-06 15:45:40.717263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.924 qpair failed and we were unable to recover it. 
00:28:34.924 [2024-12-06 15:45:40.727199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.924 [2024-12-06 15:45:40.727256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.924 [2024-12-06 15:45:40.727270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.924 [2024-12-06 15:45:40.727277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.924 [2024-12-06 15:45:40.727283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.924 [2024-12-06 15:45:40.727298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.924 qpair failed and we were unable to recover it. 
00:28:34.924 [2024-12-06 15:45:40.737221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.924 [2024-12-06 15:45:40.737307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.924 [2024-12-06 15:45:40.737321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.924 [2024-12-06 15:45:40.737328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.924 [2024-12-06 15:45:40.737335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.924 [2024-12-06 15:45:40.737350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.924 qpair failed and we were unable to recover it. 
00:28:34.924 [2024-12-06 15:45:40.747247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.924 [2024-12-06 15:45:40.747303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.924 [2024-12-06 15:45:40.747317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.924 [2024-12-06 15:45:40.747324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.924 [2024-12-06 15:45:40.747331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.924 [2024-12-06 15:45:40.747346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.924 qpair failed and we were unable to recover it. 
00:28:34.924 [2024-12-06 15:45:40.757248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.924 [2024-12-06 15:45:40.757303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.924 [2024-12-06 15:45:40.757318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.924 [2024-12-06 15:45:40.757326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.924 [2024-12-06 15:45:40.757333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.924 [2024-12-06 15:45:40.757347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.924 qpair failed and we were unable to recover it. 
00:28:34.924 [2024-12-06 15:45:40.767303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.924 [2024-12-06 15:45:40.767360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.924 [2024-12-06 15:45:40.767378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.924 [2024-12-06 15:45:40.767385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.924 [2024-12-06 15:45:40.767392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.924 [2024-12-06 15:45:40.767407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.924 qpair failed and we were unable to recover it. 
00:28:34.924 [2024-12-06 15:45:40.777324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.924 [2024-12-06 15:45:40.777386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.924 [2024-12-06 15:45:40.777400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.924 [2024-12-06 15:45:40.777411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.924 [2024-12-06 15:45:40.777417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.924 [2024-12-06 15:45:40.777432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.924 qpair failed and we were unable to recover it. 
00:28:34.924 [2024-12-06 15:45:40.787415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.924 [2024-12-06 15:45:40.787504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.924 [2024-12-06 15:45:40.787519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.924 [2024-12-06 15:45:40.787526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.924 [2024-12-06 15:45:40.787532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.924 [2024-12-06 15:45:40.787546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.924 qpair failed and we were unable to recover it. 
00:28:34.924 [2024-12-06 15:45:40.797311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.924 [2024-12-06 15:45:40.797401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.924 [2024-12-06 15:45:40.797415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.924 [2024-12-06 15:45:40.797423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.924 [2024-12-06 15:45:40.797429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.924 [2024-12-06 15:45:40.797444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.924 qpair failed and we were unable to recover it. 
00:28:34.924 [2024-12-06 15:45:40.807418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.924 [2024-12-06 15:45:40.807477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.924 [2024-12-06 15:45:40.807493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.925 [2024-12-06 15:45:40.807501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.925 [2024-12-06 15:45:40.807507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.925 [2024-12-06 15:45:40.807523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.925 qpair failed and we were unable to recover it. 
00:28:34.925 [2024-12-06 15:45:40.817482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.925 [2024-12-06 15:45:40.817543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.925 [2024-12-06 15:45:40.817557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.925 [2024-12-06 15:45:40.817564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.925 [2024-12-06 15:45:40.817570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.925 [2024-12-06 15:45:40.817589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.925 qpair failed and we were unable to recover it. 
00:28:34.925 [2024-12-06 15:45:40.827391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.925 [2024-12-06 15:45:40.827446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.925 [2024-12-06 15:45:40.827460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.925 [2024-12-06 15:45:40.827467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.925 [2024-12-06 15:45:40.827473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.925 [2024-12-06 15:45:40.827488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.925 qpair failed and we were unable to recover it. 
00:28:34.925 [2024-12-06 15:45:40.837440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.925 [2024-12-06 15:45:40.837530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.925 [2024-12-06 15:45:40.837545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.925 [2024-12-06 15:45:40.837552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.925 [2024-12-06 15:45:40.837562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.925 [2024-12-06 15:45:40.837578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.925 qpair failed and we were unable to recover it. 
00:28:34.925 [2024-12-06 15:45:40.847509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.925 [2024-12-06 15:45:40.847604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.925 [2024-12-06 15:45:40.847618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.925 [2024-12-06 15:45:40.847625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.925 [2024-12-06 15:45:40.847631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.925 [2024-12-06 15:45:40.847646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.925 qpair failed and we were unable to recover it. 
00:28:34.925 [2024-12-06 15:45:40.857570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.925 [2024-12-06 15:45:40.857621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.925 [2024-12-06 15:45:40.857633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.925 [2024-12-06 15:45:40.857640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.925 [2024-12-06 15:45:40.857647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.925 [2024-12-06 15:45:40.857661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.925 qpair failed and we were unable to recover it. 
00:28:34.925 [2024-12-06 15:45:40.867639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.925 [2024-12-06 15:45:40.867693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.925 [2024-12-06 15:45:40.867706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.925 [2024-12-06 15:45:40.867713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.925 [2024-12-06 15:45:40.867719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.925 [2024-12-06 15:45:40.867734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.925 qpair failed and we were unable to recover it. 
00:28:34.925 [2024-12-06 15:45:40.877633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.925 [2024-12-06 15:45:40.877714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.925 [2024-12-06 15:45:40.877728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.925 [2024-12-06 15:45:40.877735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.925 [2024-12-06 15:45:40.877741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.925 [2024-12-06 15:45:40.877755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.925 qpair failed and we were unable to recover it. 
00:28:34.925 [2024-12-06 15:45:40.887708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.925 [2024-12-06 15:45:40.887762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.925 [2024-12-06 15:45:40.887776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.925 [2024-12-06 15:45:40.887782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.925 [2024-12-06 15:45:40.887789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.925 [2024-12-06 15:45:40.887804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.925 qpair failed and we were unable to recover it. 
00:28:34.925 [2024-12-06 15:45:40.897715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.925 [2024-12-06 15:45:40.897770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.925 [2024-12-06 15:45:40.897784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.925 [2024-12-06 15:45:40.897791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.925 [2024-12-06 15:45:40.897798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.925 [2024-12-06 15:45:40.897812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.925 qpair failed and we were unable to recover it. 
00:28:34.925 [2024-12-06 15:45:40.907718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.925 [2024-12-06 15:45:40.907773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.925 [2024-12-06 15:45:40.907792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.925 [2024-12-06 15:45:40.907799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.925 [2024-12-06 15:45:40.907806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.925 [2024-12-06 15:45:40.907821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.925 qpair failed and we were unable to recover it. 
00:28:34.925 [2024-12-06 15:45:40.917758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.925 [2024-12-06 15:45:40.917818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.925 [2024-12-06 15:45:40.917831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.925 [2024-12-06 15:45:40.917838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.925 [2024-12-06 15:45:40.917844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:34.925 [2024-12-06 15:45:40.917859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.925 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:40.927717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:40.927780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:40.927793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:40.927800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.185 [2024-12-06 15:45:40.927806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.185 [2024-12-06 15:45:40.927821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.185 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:40.937719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:40.937777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:40.937791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:40.937798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.185 [2024-12-06 15:45:40.937804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.185 [2024-12-06 15:45:40.937819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.185 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:40.947808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:40.947864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:40.947877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:40.947884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.185 [2024-12-06 15:45:40.947894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.185 [2024-12-06 15:45:40.947909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.185 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:40.957877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:40.957932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:40.957945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:40.957952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.185 [2024-12-06 15:45:40.957958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.185 [2024-12-06 15:45:40.957973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.185 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:40.967818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:40.967873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:40.967887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:40.967893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.185 [2024-12-06 15:45:40.967899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.185 [2024-12-06 15:45:40.967914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.185 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:40.977823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:40.977880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:40.977895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:40.977902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.185 [2024-12-06 15:45:40.977908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.185 [2024-12-06 15:45:40.977922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.185 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:40.987914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:40.987962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:40.987975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:40.987982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.185 [2024-12-06 15:45:40.987988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.185 [2024-12-06 15:45:40.988003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.185 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:40.997876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:40.997927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:40.997941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:40.997948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.185 [2024-12-06 15:45:40.997954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.185 [2024-12-06 15:45:40.997969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.185 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:41.007939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:41.007997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:41.008011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:41.008018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.185 [2024-12-06 15:45:41.008024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.185 [2024-12-06 15:45:41.008039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.185 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:41.018038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:41.018097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:41.018111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:41.018119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.185 [2024-12-06 15:45:41.018125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.185 [2024-12-06 15:45:41.018139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.185 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:41.027963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:41.028014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:41.028028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:41.028035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.185 [2024-12-06 15:45:41.028041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.185 [2024-12-06 15:45:41.028055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.185 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:41.038106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:41.038159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:41.038175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:41.038182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.185 [2024-12-06 15:45:41.038189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.185 [2024-12-06 15:45:41.038203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.185 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:41.048121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:41.048176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:41.048191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:41.048198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.185 [2024-12-06 15:45:41.048204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.185 [2024-12-06 15:45:41.048220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.185 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:41.058114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:41.058168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:41.058184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:41.058192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.185 [2024-12-06 15:45:41.058198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.185 [2024-12-06 15:45:41.058214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.185 qpair failed and we were unable to recover it. 
00:28:35.185 [2024-12-06 15:45:41.068171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.185 [2024-12-06 15:45:41.068229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.185 [2024-12-06 15:45:41.068244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.185 [2024-12-06 15:45:41.068251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.186 [2024-12-06 15:45:41.068258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.186 [2024-12-06 15:45:41.068273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.186 qpair failed and we were unable to recover it. 
00:28:35.186 [2024-12-06 15:45:41.078185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.186 [2024-12-06 15:45:41.078238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.186 [2024-12-06 15:45:41.078252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.186 [2024-12-06 15:45:41.078259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.186 [2024-12-06 15:45:41.078269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.186 [2024-12-06 15:45:41.078284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.186 qpair failed and we were unable to recover it. 
00:28:35.186 [2024-12-06 15:45:41.088279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.186 [2024-12-06 15:45:41.088335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.186 [2024-12-06 15:45:41.088349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.186 [2024-12-06 15:45:41.088356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.186 [2024-12-06 15:45:41.088363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.186 [2024-12-06 15:45:41.088382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.186 qpair failed and we were unable to recover it. 
00:28:35.186 [2024-12-06 15:45:41.098241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.186 [2024-12-06 15:45:41.098302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.186 [2024-12-06 15:45:41.098316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.186 [2024-12-06 15:45:41.098323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.186 [2024-12-06 15:45:41.098330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.186 [2024-12-06 15:45:41.098345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.186 qpair failed and we were unable to recover it. 
00:28:35.186 [2024-12-06 15:45:41.108294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.186 [2024-12-06 15:45:41.108349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.186 [2024-12-06 15:45:41.108364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.186 [2024-12-06 15:45:41.108374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.186 [2024-12-06 15:45:41.108381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.186 [2024-12-06 15:45:41.108396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.186 qpair failed and we were unable to recover it. 
00:28:35.186 [2024-12-06 15:45:41.118307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.186 [2024-12-06 15:45:41.118361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.186 [2024-12-06 15:45:41.118380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.186 [2024-12-06 15:45:41.118387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.186 [2024-12-06 15:45:41.118394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.186 [2024-12-06 15:45:41.118409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.186 qpair failed and we were unable to recover it.
00:28:35.186 [2024-12-06 15:45:41.128334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.186 [2024-12-06 15:45:41.128393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.186 [2024-12-06 15:45:41.128407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.186 [2024-12-06 15:45:41.128414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.186 [2024-12-06 15:45:41.128420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.186 [2024-12-06 15:45:41.128435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.186 qpair failed and we were unable to recover it.
00:28:35.186 [2024-12-06 15:45:41.138354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.186 [2024-12-06 15:45:41.138417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.186 [2024-12-06 15:45:41.138431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.186 [2024-12-06 15:45:41.138438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.186 [2024-12-06 15:45:41.138444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.186 [2024-12-06 15:45:41.138459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.186 qpair failed and we were unable to recover it.
00:28:35.186 [2024-12-06 15:45:41.148378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.186 [2024-12-06 15:45:41.148434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.186 [2024-12-06 15:45:41.148448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.186 [2024-12-06 15:45:41.148456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.186 [2024-12-06 15:45:41.148462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.186 [2024-12-06 15:45:41.148477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.186 qpair failed and we were unable to recover it.
00:28:35.186 [2024-12-06 15:45:41.158447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.186 [2024-12-06 15:45:41.158503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.186 [2024-12-06 15:45:41.158516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.186 [2024-12-06 15:45:41.158523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.186 [2024-12-06 15:45:41.158530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.186 [2024-12-06 15:45:41.158544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.186 qpair failed and we were unable to recover it.
00:28:35.186 [2024-12-06 15:45:41.168453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.186 [2024-12-06 15:45:41.168512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.186 [2024-12-06 15:45:41.168528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.186 [2024-12-06 15:45:41.168535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.186 [2024-12-06 15:45:41.168542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.186 [2024-12-06 15:45:41.168557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.186 qpair failed and we were unable to recover it.
00:28:35.186 [2024-12-06 15:45:41.178506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.186 [2024-12-06 15:45:41.178578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.186 [2024-12-06 15:45:41.178591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.186 [2024-12-06 15:45:41.178598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.186 [2024-12-06 15:45:41.178604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.186 [2024-12-06 15:45:41.178619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.186 qpair failed and we were unable to recover it.
00:28:35.443 [2024-12-06 15:45:41.188517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.443 [2024-12-06 15:45:41.188574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.443 [2024-12-06 15:45:41.188587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.443 [2024-12-06 15:45:41.188595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.443 [2024-12-06 15:45:41.188601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.443 [2024-12-06 15:45:41.188615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.443 qpair failed and we were unable to recover it.
00:28:35.443 [2024-12-06 15:45:41.198546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.443 [2024-12-06 15:45:41.198618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.443 [2024-12-06 15:45:41.198632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.443 [2024-12-06 15:45:41.198639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.443 [2024-12-06 15:45:41.198645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.443 [2024-12-06 15:45:41.198660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.443 qpair failed and we were unable to recover it.
00:28:35.443 [2024-12-06 15:45:41.208628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.443 [2024-12-06 15:45:41.208685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.443 [2024-12-06 15:45:41.208698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.443 [2024-12-06 15:45:41.208709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.443 [2024-12-06 15:45:41.208715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.443 [2024-12-06 15:45:41.208730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.443 qpair failed and we were unable to recover it.
00:28:35.443 [2024-12-06 15:45:41.218527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.443 [2024-12-06 15:45:41.218589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.443 [2024-12-06 15:45:41.218603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.443 [2024-12-06 15:45:41.218610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.443 [2024-12-06 15:45:41.218616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.443 [2024-12-06 15:45:41.218631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.443 qpair failed and we were unable to recover it.
00:28:35.443 [2024-12-06 15:45:41.228613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.443 [2024-12-06 15:45:41.228665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.443 [2024-12-06 15:45:41.228679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.443 [2024-12-06 15:45:41.228686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.443 [2024-12-06 15:45:41.228692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.443 [2024-12-06 15:45:41.228708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.443 qpair failed and we were unable to recover it.
00:28:35.443 [2024-12-06 15:45:41.238676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.443 [2024-12-06 15:45:41.238730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.443 [2024-12-06 15:45:41.238744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.443 [2024-12-06 15:45:41.238751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.443 [2024-12-06 15:45:41.238757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.443 [2024-12-06 15:45:41.238772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.443 qpair failed and we were unable to recover it.
00:28:35.443 [2024-12-06 15:45:41.248680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.443 [2024-12-06 15:45:41.248739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.443 [2024-12-06 15:45:41.248753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.443 [2024-12-06 15:45:41.248760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.443 [2024-12-06 15:45:41.248766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.443 [2024-12-06 15:45:41.248785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.443 qpair failed and we were unable to recover it.
00:28:35.443 [2024-12-06 15:45:41.258712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.443 [2024-12-06 15:45:41.258801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.443 [2024-12-06 15:45:41.258816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.443 [2024-12-06 15:45:41.258823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.443 [2024-12-06 15:45:41.258829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.443 [2024-12-06 15:45:41.258844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.443 qpair failed and we were unable to recover it.
00:28:35.443 [2024-12-06 15:45:41.268726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.443 [2024-12-06 15:45:41.268778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.443 [2024-12-06 15:45:41.268792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.443 [2024-12-06 15:45:41.268799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.443 [2024-12-06 15:45:41.268805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.443 [2024-12-06 15:45:41.268820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.443 qpair failed and we were unable to recover it.
00:28:35.443 [2024-12-06 15:45:41.278743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.443 [2024-12-06 15:45:41.278828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.443 [2024-12-06 15:45:41.278842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.443 [2024-12-06 15:45:41.278849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.443 [2024-12-06 15:45:41.278855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.443 [2024-12-06 15:45:41.278870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.443 qpair failed and we were unable to recover it.
00:28:35.443 [2024-12-06 15:45:41.288780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.443 [2024-12-06 15:45:41.288846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.443 [2024-12-06 15:45:41.288859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.288866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.288872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.288887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.444 [2024-12-06 15:45:41.298806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.444 [2024-12-06 15:45:41.298867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.444 [2024-12-06 15:45:41.298882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.298889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.298896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.298911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.444 [2024-12-06 15:45:41.308834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.444 [2024-12-06 15:45:41.308930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.444 [2024-12-06 15:45:41.308944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.308951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.308959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.308975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.444 [2024-12-06 15:45:41.318873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.444 [2024-12-06 15:45:41.318926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.444 [2024-12-06 15:45:41.318940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.318947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.318953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.318968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.444 [2024-12-06 15:45:41.328894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.444 [2024-12-06 15:45:41.328946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.444 [2024-12-06 15:45:41.328960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.328968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.328974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.328989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.444 [2024-12-06 15:45:41.338920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.444 [2024-12-06 15:45:41.338998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.444 [2024-12-06 15:45:41.339012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.339022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.339029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.339043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.444 [2024-12-06 15:45:41.348942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.444 [2024-12-06 15:45:41.348996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.444 [2024-12-06 15:45:41.349010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.349018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.349024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.349038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.444 [2024-12-06 15:45:41.358972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.444 [2024-12-06 15:45:41.359036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.444 [2024-12-06 15:45:41.359050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.359057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.359064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.359078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.444 [2024-12-06 15:45:41.369025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.444 [2024-12-06 15:45:41.369086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.444 [2024-12-06 15:45:41.369100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.369107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.369114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.369129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.444 [2024-12-06 15:45:41.379086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.444 [2024-12-06 15:45:41.379146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.444 [2024-12-06 15:45:41.379159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.379166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.379172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.379190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.444 [2024-12-06 15:45:41.389125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.444 [2024-12-06 15:45:41.389179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.444 [2024-12-06 15:45:41.389194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.389201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.389208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.389223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.444 [2024-12-06 15:45:41.399086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.444 [2024-12-06 15:45:41.399140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.444 [2024-12-06 15:45:41.399154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.399161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.399168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.399183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.444 [2024-12-06 15:45:41.409155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.444 [2024-12-06 15:45:41.409214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.444 [2024-12-06 15:45:41.409228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.409234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.409241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.409256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.444 [2024-12-06 15:45:41.419207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.444 [2024-12-06 15:45:41.419266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.444 [2024-12-06 15:45:41.419280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.419287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.419294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.419309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.444 [2024-12-06 15:45:41.429217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.444 [2024-12-06 15:45:41.429277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.444 [2024-12-06 15:45:41.429292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.444 [2024-12-06 15:45:41.429299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.444 [2024-12-06 15:45:41.429305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.444 [2024-12-06 15:45:41.429320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.444 qpair failed and we were unable to recover it.
00:28:35.702 [2024-12-06 15:45:41.439271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.702 [2024-12-06 15:45:41.439326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.702 [2024-12-06 15:45:41.439340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.702 [2024-12-06 15:45:41.439347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.702 [2024-12-06 15:45:41.439353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.702 [2024-12-06 15:45:41.439373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.702 qpair failed and we were unable to recover it.
00:28:35.702 [2024-12-06 15:45:41.449281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.702 [2024-12-06 15:45:41.449339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.702 [2024-12-06 15:45:41.449353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.702 [2024-12-06 15:45:41.449360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.702 [2024-12-06 15:45:41.449370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.702 [2024-12-06 15:45:41.449385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.702 qpair failed and we were unable to recover it.
00:28:35.702 [2024-12-06 15:45:41.459290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.702 [2024-12-06 15:45:41.459383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.702 [2024-12-06 15:45:41.459399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.702 [2024-12-06 15:45:41.459406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.702 [2024-12-06 15:45:41.459412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:35.702 [2024-12-06 15:45:41.459428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:35.702 qpair failed and we were unable to recover it.
00:28:35.702 [2024-12-06 15:45:41.469268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.702 [2024-12-06 15:45:41.469323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.702 [2024-12-06 15:45:41.469340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.702 [2024-12-06 15:45:41.469347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.702 [2024-12-06 15:45:41.469353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.702 [2024-12-06 15:45:41.469371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.702 qpair failed and we were unable to recover it. 
00:28:35.702 [2024-12-06 15:45:41.479346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.702 [2024-12-06 15:45:41.479400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.702 [2024-12-06 15:45:41.479414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.702 [2024-12-06 15:45:41.479421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.702 [2024-12-06 15:45:41.479428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.702 [2024-12-06 15:45:41.479442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.702 qpair failed and we were unable to recover it. 
00:28:35.702 [2024-12-06 15:45:41.489351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.702 [2024-12-06 15:45:41.489444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.702 [2024-12-06 15:45:41.489457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.702 [2024-12-06 15:45:41.489465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.702 [2024-12-06 15:45:41.489471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.702 [2024-12-06 15:45:41.489486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.702 qpair failed and we were unable to recover it. 
00:28:35.702 [2024-12-06 15:45:41.499381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.702 [2024-12-06 15:45:41.499435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.702 [2024-12-06 15:45:41.499449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.702 [2024-12-06 15:45:41.499456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.702 [2024-12-06 15:45:41.499462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.702 [2024-12-06 15:45:41.499477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.702 qpair failed and we were unable to recover it. 
00:28:35.702 [2024-12-06 15:45:41.509409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.702 [2024-12-06 15:45:41.509499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.702 [2024-12-06 15:45:41.509513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.702 [2024-12-06 15:45:41.509521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.702 [2024-12-06 15:45:41.509530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.702 [2024-12-06 15:45:41.509545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.702 qpair failed and we were unable to recover it. 
00:28:35.702 [2024-12-06 15:45:41.519404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.702 [2024-12-06 15:45:41.519491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.702 [2024-12-06 15:45:41.519505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.702 [2024-12-06 15:45:41.519512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.702 [2024-12-06 15:45:41.519518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.702 [2024-12-06 15:45:41.519533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.702 qpair failed and we were unable to recover it. 
00:28:35.702 [2024-12-06 15:45:41.529465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.702 [2024-12-06 15:45:41.529521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.702 [2024-12-06 15:45:41.529534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.703 [2024-12-06 15:45:41.529541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.703 [2024-12-06 15:45:41.529547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.703 [2024-12-06 15:45:41.529563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.703 qpair failed and we were unable to recover it. 
00:28:35.703 [2024-12-06 15:45:41.539500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.703 [2024-12-06 15:45:41.539554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.703 [2024-12-06 15:45:41.539567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.703 [2024-12-06 15:45:41.539574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.703 [2024-12-06 15:45:41.539580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.703 [2024-12-06 15:45:41.539595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.703 qpair failed and we were unable to recover it. 
00:28:35.703 [2024-12-06 15:45:41.549510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.703 [2024-12-06 15:45:41.549563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.703 [2024-12-06 15:45:41.549577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.703 [2024-12-06 15:45:41.549584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.703 [2024-12-06 15:45:41.549590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.703 [2024-12-06 15:45:41.549605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.703 qpair failed and we were unable to recover it. 
00:28:35.703 [2024-12-06 15:45:41.559564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.703 [2024-12-06 15:45:41.559622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.703 [2024-12-06 15:45:41.559636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.703 [2024-12-06 15:45:41.559644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.703 [2024-12-06 15:45:41.559653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.703 [2024-12-06 15:45:41.559667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.703 qpair failed and we were unable to recover it. 
00:28:35.703 [2024-12-06 15:45:41.569598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.703 [2024-12-06 15:45:41.569671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.703 [2024-12-06 15:45:41.569685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.703 [2024-12-06 15:45:41.569692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.703 [2024-12-06 15:45:41.569699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.703 [2024-12-06 15:45:41.569714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.703 qpair failed and we were unable to recover it. 
00:28:35.703 [2024-12-06 15:45:41.579609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.703 [2024-12-06 15:45:41.579664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.703 [2024-12-06 15:45:41.579677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.703 [2024-12-06 15:45:41.579684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.703 [2024-12-06 15:45:41.579690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.703 [2024-12-06 15:45:41.579704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.703 qpair failed and we were unable to recover it. 
00:28:35.703 [2024-12-06 15:45:41.589638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.703 [2024-12-06 15:45:41.589689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.703 [2024-12-06 15:45:41.589702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.703 [2024-12-06 15:45:41.589709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.703 [2024-12-06 15:45:41.589715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.703 [2024-12-06 15:45:41.589730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.703 qpair failed and we were unable to recover it. 
00:28:35.703 [2024-12-06 15:45:41.599689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.703 [2024-12-06 15:45:41.599743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.703 [2024-12-06 15:45:41.599761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.703 [2024-12-06 15:45:41.599768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.703 [2024-12-06 15:45:41.599774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.703 [2024-12-06 15:45:41.599790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.703 qpair failed and we were unable to recover it. 
00:28:35.703 [2024-12-06 15:45:41.609701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.703 [2024-12-06 15:45:41.609757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.703 [2024-12-06 15:45:41.609771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.703 [2024-12-06 15:45:41.609778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.703 [2024-12-06 15:45:41.609784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.703 [2024-12-06 15:45:41.609799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.703 qpair failed and we were unable to recover it. 
00:28:35.703 [2024-12-06 15:45:41.619725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.703 [2024-12-06 15:45:41.619787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.703 [2024-12-06 15:45:41.619800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.703 [2024-12-06 15:45:41.619807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.703 [2024-12-06 15:45:41.619813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.703 [2024-12-06 15:45:41.619828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.703 qpair failed and we were unable to recover it. 
00:28:35.703 [2024-12-06 15:45:41.629749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.703 [2024-12-06 15:45:41.629804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.703 [2024-12-06 15:45:41.629818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.703 [2024-12-06 15:45:41.629825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.703 [2024-12-06 15:45:41.629831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.703 [2024-12-06 15:45:41.629846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.703 qpair failed and we were unable to recover it. 
00:28:35.703 [2024-12-06 15:45:41.639822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.703 [2024-12-06 15:45:41.639876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.703 [2024-12-06 15:45:41.639889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.703 [2024-12-06 15:45:41.639896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.703 [2024-12-06 15:45:41.639906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.703 [2024-12-06 15:45:41.639921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.703 qpair failed and we were unable to recover it. 
00:28:35.703 [2024-12-06 15:45:41.649810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.703 [2024-12-06 15:45:41.649867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.703 [2024-12-06 15:45:41.649881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.703 [2024-12-06 15:45:41.649888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.703 [2024-12-06 15:45:41.649894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.703 [2024-12-06 15:45:41.649909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.703 qpair failed and we were unable to recover it. 
00:28:35.703 [2024-12-06 15:45:41.659833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.703 [2024-12-06 15:45:41.659890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.703 [2024-12-06 15:45:41.659903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.703 [2024-12-06 15:45:41.659910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.704 [2024-12-06 15:45:41.659917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.704 [2024-12-06 15:45:41.659931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.704 qpair failed and we were unable to recover it. 
00:28:35.704 [2024-12-06 15:45:41.669874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.704 [2024-12-06 15:45:41.669926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.704 [2024-12-06 15:45:41.669941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.704 [2024-12-06 15:45:41.669948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.704 [2024-12-06 15:45:41.669955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.704 [2024-12-06 15:45:41.669970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.704 qpair failed and we were unable to recover it. 
00:28:35.704 [2024-12-06 15:45:41.679924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.704 [2024-12-06 15:45:41.679980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.704 [2024-12-06 15:45:41.679994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.704 [2024-12-06 15:45:41.680001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.704 [2024-12-06 15:45:41.680008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.704 [2024-12-06 15:45:41.680022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.704 qpair failed and we were unable to recover it. 
00:28:35.704 [2024-12-06 15:45:41.689936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.704 [2024-12-06 15:45:41.689992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.704 [2024-12-06 15:45:41.690006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.704 [2024-12-06 15:45:41.690013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.704 [2024-12-06 15:45:41.690020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.704 [2024-12-06 15:45:41.690034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.704 qpair failed and we were unable to recover it. 
00:28:35.961 [2024-12-06 15:45:41.699967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.961 [2024-12-06 15:45:41.700031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.961 [2024-12-06 15:45:41.700045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.961 [2024-12-06 15:45:41.700052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.961 [2024-12-06 15:45:41.700058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.961 [2024-12-06 15:45:41.700073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.961 qpair failed and we were unable to recover it. 
00:28:35.961 [2024-12-06 15:45:41.709993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.961 [2024-12-06 15:45:41.710076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.961 [2024-12-06 15:45:41.710090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.961 [2024-12-06 15:45:41.710098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.961 [2024-12-06 15:45:41.710105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.961 [2024-12-06 15:45:41.710120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.961 qpair failed and we were unable to recover it. 
00:28:35.961 [2024-12-06 15:45:41.720024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.961 [2024-12-06 15:45:41.720079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.961 [2024-12-06 15:45:41.720092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.961 [2024-12-06 15:45:41.720099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.961 [2024-12-06 15:45:41.720106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.961 [2024-12-06 15:45:41.720120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.961 qpair failed and we were unable to recover it. 
00:28:35.961 [2024-12-06 15:45:41.730059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.961 [2024-12-06 15:45:41.730117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.961 [2024-12-06 15:45:41.730133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.961 [2024-12-06 15:45:41.730140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.961 [2024-12-06 15:45:41.730146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.961 [2024-12-06 15:45:41.730161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.961 qpair failed and we were unable to recover it. 
00:28:35.961 [2024-12-06 15:45:41.740041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.961 [2024-12-06 15:45:41.740134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.961 [2024-12-06 15:45:41.740147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.961 [2024-12-06 15:45:41.740154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.961 [2024-12-06 15:45:41.740160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.961 [2024-12-06 15:45:41.740175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.961 qpair failed and we were unable to recover it. 
00:28:35.961 [2024-12-06 15:45:41.750124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.962 [2024-12-06 15:45:41.750196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.962 [2024-12-06 15:45:41.750210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.962 [2024-12-06 15:45:41.750217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.962 [2024-12-06 15:45:41.750223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.962 [2024-12-06 15:45:41.750238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.962 qpair failed and we were unable to recover it. 
00:28:35.962 [2024-12-06 15:45:41.760130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.962 [2024-12-06 15:45:41.760195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.962 [2024-12-06 15:45:41.760209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.962 [2024-12-06 15:45:41.760216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.962 [2024-12-06 15:45:41.760223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.962 [2024-12-06 15:45:41.760238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.962 qpair failed and we were unable to recover it. 
00:28:35.962 [2024-12-06 15:45:41.770201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.962 [2024-12-06 15:45:41.770259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.962 [2024-12-06 15:45:41.770272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.962 [2024-12-06 15:45:41.770283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.962 [2024-12-06 15:45:41.770289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.962 [2024-12-06 15:45:41.770304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.962 qpair failed and we were unable to recover it. 
00:28:35.962 [2024-12-06 15:45:41.780188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.962 [2024-12-06 15:45:41.780245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.962 [2024-12-06 15:45:41.780258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.962 [2024-12-06 15:45:41.780265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.962 [2024-12-06 15:45:41.780272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.962 [2024-12-06 15:45:41.780287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.962 qpair failed and we were unable to recover it. 
00:28:35.962 [2024-12-06 15:45:41.790219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.962 [2024-12-06 15:45:41.790273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.962 [2024-12-06 15:45:41.790287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.962 [2024-12-06 15:45:41.790294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.962 [2024-12-06 15:45:41.790300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.962 [2024-12-06 15:45:41.790315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.962 qpair failed and we were unable to recover it. 
00:28:35.962 [2024-12-06 15:45:41.800242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.962 [2024-12-06 15:45:41.800297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.962 [2024-12-06 15:45:41.800311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.962 [2024-12-06 15:45:41.800318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.962 [2024-12-06 15:45:41.800325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.962 [2024-12-06 15:45:41.800340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.962 qpair failed and we were unable to recover it. 
00:28:35.962 [2024-12-06 15:45:41.810282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.962 [2024-12-06 15:45:41.810337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.962 [2024-12-06 15:45:41.810351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.962 [2024-12-06 15:45:41.810358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.962 [2024-12-06 15:45:41.810364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.962 [2024-12-06 15:45:41.810386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.962 qpair failed and we were unable to recover it. 
00:28:35.962 [2024-12-06 15:45:41.820299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.962 [2024-12-06 15:45:41.820356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.962 [2024-12-06 15:45:41.820374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.962 [2024-12-06 15:45:41.820381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.962 [2024-12-06 15:45:41.820388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.962 [2024-12-06 15:45:41.820403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.962 qpair failed and we were unable to recover it. 
00:28:35.962 [2024-12-06 15:45:41.830327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.962 [2024-12-06 15:45:41.830381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.962 [2024-12-06 15:45:41.830395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.962 [2024-12-06 15:45:41.830402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.962 [2024-12-06 15:45:41.830409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.962 [2024-12-06 15:45:41.830424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.962 qpair failed and we were unable to recover it. 
00:28:35.962 [2024-12-06 15:45:41.840359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.962 [2024-12-06 15:45:41.840413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.962 [2024-12-06 15:45:41.840427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.962 [2024-12-06 15:45:41.840435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.962 [2024-12-06 15:45:41.840441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.962 [2024-12-06 15:45:41.840457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.962 qpair failed and we were unable to recover it. 
00:28:35.962 [2024-12-06 15:45:41.850403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.962 [2024-12-06 15:45:41.850460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.962 [2024-12-06 15:45:41.850475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.962 [2024-12-06 15:45:41.850482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.962 [2024-12-06 15:45:41.850489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.962 [2024-12-06 15:45:41.850504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.962 qpair failed and we were unable to recover it. 
00:28:35.962 [2024-12-06 15:45:41.860423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.962 [2024-12-06 15:45:41.860484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.962 [2024-12-06 15:45:41.860497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.962 [2024-12-06 15:45:41.860504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.962 [2024-12-06 15:45:41.860510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.962 [2024-12-06 15:45:41.860525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.962 qpair failed and we were unable to recover it. 
00:28:35.962 [2024-12-06 15:45:41.870448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.962 [2024-12-06 15:45:41.870502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.962 [2024-12-06 15:45:41.870516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.962 [2024-12-06 15:45:41.870522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.962 [2024-12-06 15:45:41.870529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.962 [2024-12-06 15:45:41.870543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.962 qpair failed and we were unable to recover it. 
00:28:35.962 [2024-12-06 15:45:41.880474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.963 [2024-12-06 15:45:41.880531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.963 [2024-12-06 15:45:41.880546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.963 [2024-12-06 15:45:41.880553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.963 [2024-12-06 15:45:41.880559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.963 [2024-12-06 15:45:41.880574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.963 qpair failed and we were unable to recover it. 
00:28:35.963 [2024-12-06 15:45:41.890527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.963 [2024-12-06 15:45:41.890583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.963 [2024-12-06 15:45:41.890596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.963 [2024-12-06 15:45:41.890604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.963 [2024-12-06 15:45:41.890611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.963 [2024-12-06 15:45:41.890626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.963 qpair failed and we were unable to recover it. 
00:28:35.963 [2024-12-06 15:45:41.900561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.963 [2024-12-06 15:45:41.900632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.963 [2024-12-06 15:45:41.900646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.963 [2024-12-06 15:45:41.900656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.963 [2024-12-06 15:45:41.900662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.963 [2024-12-06 15:45:41.900677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.963 qpair failed and we were unable to recover it. 
00:28:35.963 [2024-12-06 15:45:41.910569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.963 [2024-12-06 15:45:41.910621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.963 [2024-12-06 15:45:41.910636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.963 [2024-12-06 15:45:41.910643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.963 [2024-12-06 15:45:41.910649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.963 [2024-12-06 15:45:41.910664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.963 qpair failed and we were unable to recover it. 
00:28:35.963 [2024-12-06 15:45:41.920597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.963 [2024-12-06 15:45:41.920652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.963 [2024-12-06 15:45:41.920665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.963 [2024-12-06 15:45:41.920673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.963 [2024-12-06 15:45:41.920679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.963 [2024-12-06 15:45:41.920694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.963 qpair failed and we were unable to recover it. 
00:28:35.963 [2024-12-06 15:45:41.930628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.963 [2024-12-06 15:45:41.930684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.963 [2024-12-06 15:45:41.930697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.963 [2024-12-06 15:45:41.930705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.963 [2024-12-06 15:45:41.930711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.963 [2024-12-06 15:45:41.930726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.963 qpair failed and we were unable to recover it. 
00:28:35.963 [2024-12-06 15:45:41.940669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.963 [2024-12-06 15:45:41.940739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.963 [2024-12-06 15:45:41.940753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.963 [2024-12-06 15:45:41.940759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.963 [2024-12-06 15:45:41.940766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.963 [2024-12-06 15:45:41.940784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.963 qpair failed and we were unable to recover it. 
00:28:35.963 [2024-12-06 15:45:41.950678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.963 [2024-12-06 15:45:41.950731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.963 [2024-12-06 15:45:41.950744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.963 [2024-12-06 15:45:41.950751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.963 [2024-12-06 15:45:41.950757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:35.963 [2024-12-06 15:45:41.950772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:35.963 qpair failed and we were unable to recover it. 
00:28:36.221 [2024-12-06 15:45:41.960748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.221 [2024-12-06 15:45:41.960808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.221 [2024-12-06 15:45:41.960822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.221 [2024-12-06 15:45:41.960829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.221 [2024-12-06 15:45:41.960836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.221 [2024-12-06 15:45:41.960850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.221 qpair failed and we were unable to recover it. 
00:28:36.221 [2024-12-06 15:45:41.970765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.221 [2024-12-06 15:45:41.970822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.221 [2024-12-06 15:45:41.970836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.221 [2024-12-06 15:45:41.970842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.221 [2024-12-06 15:45:41.970849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.221 [2024-12-06 15:45:41.970864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.221 qpair failed and we were unable to recover it. 
00:28:36.221 [2024-12-06 15:45:41.980815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.221 [2024-12-06 15:45:41.980871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.221 [2024-12-06 15:45:41.980884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.221 [2024-12-06 15:45:41.980891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.221 [2024-12-06 15:45:41.980897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.221 [2024-12-06 15:45:41.980913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.221 qpair failed and we were unable to recover it. 
00:28:36.221 [2024-12-06 15:45:41.990790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.221 [2024-12-06 15:45:41.990842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.221 [2024-12-06 15:45:41.990855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.221 [2024-12-06 15:45:41.990863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.221 [2024-12-06 15:45:41.990869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.221 [2024-12-06 15:45:41.990884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.221 qpair failed and we were unable to recover it. 
00:28:36.221 [2024-12-06 15:45:42.000829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.221 [2024-12-06 15:45:42.000880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.221 [2024-12-06 15:45:42.000893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.221 [2024-12-06 15:45:42.000900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.221 [2024-12-06 15:45:42.000907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.221 [2024-12-06 15:45:42.000922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.221 qpair failed and we were unable to recover it. 
00:28:36.221 [2024-12-06 15:45:42.010890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.221 [2024-12-06 15:45:42.010944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.221 [2024-12-06 15:45:42.010958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.221 [2024-12-06 15:45:42.010966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.221 [2024-12-06 15:45:42.010971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.221 [2024-12-06 15:45:42.010986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.221 qpair failed and we were unable to recover it. 
00:28:36.221 [2024-12-06 15:45:42.020881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.221 [2024-12-06 15:45:42.020941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.221 [2024-12-06 15:45:42.020954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.221 [2024-12-06 15:45:42.020961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.221 [2024-12-06 15:45:42.020968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.221 [2024-12-06 15:45:42.020982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.221 qpair failed and we were unable to recover it. 
00:28:36.221 [2024-12-06 15:45:42.030924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.221 [2024-12-06 15:45:42.030991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.221 [2024-12-06 15:45:42.031008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.221 [2024-12-06 15:45:42.031015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.221 [2024-12-06 15:45:42.031023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.221 [2024-12-06 15:45:42.031039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.221 qpair failed and we were unable to recover it. 
00:28:36.221 [2024-12-06 15:45:42.040944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.221 [2024-12-06 15:45:42.041029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.221 [2024-12-06 15:45:42.041043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.221 [2024-12-06 15:45:42.041050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.221 [2024-12-06 15:45:42.041056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.221 [2024-12-06 15:45:42.041071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.221 qpair failed and we were unable to recover it. 
00:28:36.221 [2024-12-06 15:45:42.050966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.221 [2024-12-06 15:45:42.051023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.221 [2024-12-06 15:45:42.051037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.221 [2024-12-06 15:45:42.051044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.221 [2024-12-06 15:45:42.051050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.221 [2024-12-06 15:45:42.051065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.221 qpair failed and we were unable to recover it.
00:28:36.221 [2024-12-06 15:45:42.060983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.221 [2024-12-06 15:45:42.061041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.221 [2024-12-06 15:45:42.061057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.221 [2024-12-06 15:45:42.061065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.221 [2024-12-06 15:45:42.061073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.221 [2024-12-06 15:45:42.061088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.221 qpair failed and we were unable to recover it.
00:28:36.221 [2024-12-06 15:45:42.071022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.221 [2024-12-06 15:45:42.071081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.221 [2024-12-06 15:45:42.071095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.222 [2024-12-06 15:45:42.071102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.222 [2024-12-06 15:45:42.071111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.222 [2024-12-06 15:45:42.071127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.222 qpair failed and we were unable to recover it.
00:28:36.222 [2024-12-06 15:45:42.081073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.222 [2024-12-06 15:45:42.081128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.222 [2024-12-06 15:45:42.081143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.222 [2024-12-06 15:45:42.081150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.222 [2024-12-06 15:45:42.081156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.222 [2024-12-06 15:45:42.081171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.222 qpair failed and we were unable to recover it.
00:28:36.222 [2024-12-06 15:45:42.091058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.222 [2024-12-06 15:45:42.091125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.222 [2024-12-06 15:45:42.091140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.222 [2024-12-06 15:45:42.091148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.222 [2024-12-06 15:45:42.091155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.222 [2024-12-06 15:45:42.091170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.222 qpair failed and we were unable to recover it.
00:28:36.222 [2024-12-06 15:45:42.101130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.222 [2024-12-06 15:45:42.101182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.222 [2024-12-06 15:45:42.101196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.222 [2024-12-06 15:45:42.101203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.222 [2024-12-06 15:45:42.101210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.222 [2024-12-06 15:45:42.101224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.222 qpair failed and we were unable to recover it.
00:28:36.222 [2024-12-06 15:45:42.111137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.222 [2024-12-06 15:45:42.111186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.222 [2024-12-06 15:45:42.111200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.222 [2024-12-06 15:45:42.111206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.222 [2024-12-06 15:45:42.111213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.222 [2024-12-06 15:45:42.111227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.222 qpair failed and we were unable to recover it.
00:28:36.222 [2024-12-06 15:45:42.121158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.222 [2024-12-06 15:45:42.121214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.222 [2024-12-06 15:45:42.121227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.222 [2024-12-06 15:45:42.121235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.222 [2024-12-06 15:45:42.121241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.222 [2024-12-06 15:45:42.121256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.222 qpair failed and we were unable to recover it.
00:28:36.222 [2024-12-06 15:45:42.131118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.222 [2024-12-06 15:45:42.131173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.222 [2024-12-06 15:45:42.131186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.222 [2024-12-06 15:45:42.131193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.222 [2024-12-06 15:45:42.131200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.222 [2024-12-06 15:45:42.131215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.222 qpair failed and we were unable to recover it.
00:28:36.222 [2024-12-06 15:45:42.141196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.222 [2024-12-06 15:45:42.141285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.222 [2024-12-06 15:45:42.141300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.222 [2024-12-06 15:45:42.141307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.222 [2024-12-06 15:45:42.141313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.222 [2024-12-06 15:45:42.141327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.222 qpair failed and we were unable to recover it.
00:28:36.222 [2024-12-06 15:45:42.151232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.222 [2024-12-06 15:45:42.151287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.222 [2024-12-06 15:45:42.151301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.222 [2024-12-06 15:45:42.151308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.222 [2024-12-06 15:45:42.151314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.222 [2024-12-06 15:45:42.151329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.222 qpair failed and we were unable to recover it.
00:28:36.222 [2024-12-06 15:45:42.161256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.222 [2024-12-06 15:45:42.161311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.222 [2024-12-06 15:45:42.161328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.222 [2024-12-06 15:45:42.161335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.222 [2024-12-06 15:45:42.161341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.222 [2024-12-06 15:45:42.161355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.222 qpair failed and we were unable to recover it.
00:28:36.222 [2024-12-06 15:45:42.171324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.222 [2024-12-06 15:45:42.171392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.222 [2024-12-06 15:45:42.171406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.222 [2024-12-06 15:45:42.171413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.222 [2024-12-06 15:45:42.171419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.222 [2024-12-06 15:45:42.171434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.222 qpair failed and we were unable to recover it.
00:28:36.222 [2024-12-06 15:45:42.181346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.222 [2024-12-06 15:45:42.181403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.222 [2024-12-06 15:45:42.181416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.222 [2024-12-06 15:45:42.181424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.222 [2024-12-06 15:45:42.181430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.222 [2024-12-06 15:45:42.181446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.222 qpair failed and we were unable to recover it.
00:28:36.222 [2024-12-06 15:45:42.191341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.222 [2024-12-06 15:45:42.191397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.222 [2024-12-06 15:45:42.191411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.222 [2024-12-06 15:45:42.191418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.222 [2024-12-06 15:45:42.191424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.222 [2024-12-06 15:45:42.191438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.223 qpair failed and we were unable to recover it.
00:28:36.223 [2024-12-06 15:45:42.201384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.223 [2024-12-06 15:45:42.201436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.223 [2024-12-06 15:45:42.201450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.223 [2024-12-06 15:45:42.201457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.223 [2024-12-06 15:45:42.201467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.223 [2024-12-06 15:45:42.201482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.223 qpair failed and we were unable to recover it.
00:28:36.223 [2024-12-06 15:45:42.211354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.223 [2024-12-06 15:45:42.211411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.223 [2024-12-06 15:45:42.211425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.223 [2024-12-06 15:45:42.211432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.223 [2024-12-06 15:45:42.211438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.223 [2024-12-06 15:45:42.211454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.223 qpair failed and we were unable to recover it.
00:28:36.480 [2024-12-06 15:45:42.221490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.480 [2024-12-06 15:45:42.221557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.481 [2024-12-06 15:45:42.221570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.481 [2024-12-06 15:45:42.221577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.481 [2024-12-06 15:45:42.221584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.481 [2024-12-06 15:45:42.221598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.481 qpair failed and we were unable to recover it.
00:28:36.481 [2024-12-06 15:45:42.231559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.481 [2024-12-06 15:45:42.231624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.481 [2024-12-06 15:45:42.231639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.481 [2024-12-06 15:45:42.231647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.481 [2024-12-06 15:45:42.231652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.481 [2024-12-06 15:45:42.231666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.481 qpair failed and we were unable to recover it.
00:28:36.481 [2024-12-06 15:45:42.241482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.481 [2024-12-06 15:45:42.241533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.481 [2024-12-06 15:45:42.241546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.481 [2024-12-06 15:45:42.241553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.481 [2024-12-06 15:45:42.241559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.481 [2024-12-06 15:45:42.241574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.481 qpair failed and we were unable to recover it.
00:28:36.481 [2024-12-06 15:45:42.251595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.481 [2024-12-06 15:45:42.251654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.481 [2024-12-06 15:45:42.251668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.481 [2024-12-06 15:45:42.251675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.481 [2024-12-06 15:45:42.251681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.481 [2024-12-06 15:45:42.251696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.481 qpair failed and we were unable to recover it.
00:28:36.481 [2024-12-06 15:45:42.261540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.481 [2024-12-06 15:45:42.261598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.481 [2024-12-06 15:45:42.261612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.481 [2024-12-06 15:45:42.261619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.481 [2024-12-06 15:45:42.261625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.481 [2024-12-06 15:45:42.261640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.481 qpair failed and we were unable to recover it.
00:28:36.481 [2024-12-06 15:45:42.271600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.481 [2024-12-06 15:45:42.271651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.481 [2024-12-06 15:45:42.271665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.481 [2024-12-06 15:45:42.271672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.481 [2024-12-06 15:45:42.271678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.481 [2024-12-06 15:45:42.271692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.481 qpair failed and we were unable to recover it.
00:28:36.481 [2024-12-06 15:45:42.281613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.481 [2024-12-06 15:45:42.281667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.481 [2024-12-06 15:45:42.281679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.481 [2024-12-06 15:45:42.281686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.481 [2024-12-06 15:45:42.281693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.481 [2024-12-06 15:45:42.281708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.481 qpair failed and we were unable to recover it.
00:28:36.481 [2024-12-06 15:45:42.291661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.481 [2024-12-06 15:45:42.291718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.481 [2024-12-06 15:45:42.291738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.481 [2024-12-06 15:45:42.291746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.481 [2024-12-06 15:45:42.291752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.481 [2024-12-06 15:45:42.291767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.481 qpair failed and we were unable to recover it.
00:28:36.481 [2024-12-06 15:45:42.301612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.481 [2024-12-06 15:45:42.301671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.481 [2024-12-06 15:45:42.301685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.481 [2024-12-06 15:45:42.301693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.481 [2024-12-06 15:45:42.301699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.481 [2024-12-06 15:45:42.301713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.481 qpair failed and we were unable to recover it.
00:28:36.481 [2024-12-06 15:45:42.311645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.481 [2024-12-06 15:45:42.311728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.481 [2024-12-06 15:45:42.311744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.481 [2024-12-06 15:45:42.311753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.481 [2024-12-06 15:45:42.311761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.481 [2024-12-06 15:45:42.311777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.481 qpair failed and we were unable to recover it.
00:28:36.481 [2024-12-06 15:45:42.321673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.481 [2024-12-06 15:45:42.321729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.481 [2024-12-06 15:45:42.321743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.481 [2024-12-06 15:45:42.321750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.481 [2024-12-06 15:45:42.321757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.481 [2024-12-06 15:45:42.321772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.481 qpair failed and we were unable to recover it.
00:28:36.481 [2024-12-06 15:45:42.331711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.481 [2024-12-06 15:45:42.331770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.481 [2024-12-06 15:45:42.331785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.481 [2024-12-06 15:45:42.331794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.481 [2024-12-06 15:45:42.331801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.481 [2024-12-06 15:45:42.331816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.481 qpair failed and we were unable to recover it.
00:28:36.481 [2024-12-06 15:45:42.341797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.481 [2024-12-06 15:45:42.341856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.481 [2024-12-06 15:45:42.341876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.481 [2024-12-06 15:45:42.341883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.481 [2024-12-06 15:45:42.341890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.481 [2024-12-06 15:45:42.341909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.481 qpair failed and we were unable to recover it.
00:28:36.482 [2024-12-06 15:45:42.351812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.482 [2024-12-06 15:45:42.351865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.482 [2024-12-06 15:45:42.351879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.482 [2024-12-06 15:45:42.351886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.482 [2024-12-06 15:45:42.351893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.482 [2024-12-06 15:45:42.351908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.482 qpair failed and we were unable to recover it.
00:28:36.482 [2024-12-06 15:45:42.361770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.482 [2024-12-06 15:45:42.361841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.482 [2024-12-06 15:45:42.361854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.482 [2024-12-06 15:45:42.361861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.482 [2024-12-06 15:45:42.361868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.482 [2024-12-06 15:45:42.361883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.482 qpair failed and we were unable to recover it.
00:28:36.482 [2024-12-06 15:45:42.371914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.482 [2024-12-06 15:45:42.371969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.482 [2024-12-06 15:45:42.371983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.482 [2024-12-06 15:45:42.371990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.482 [2024-12-06 15:45:42.371996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.482 [2024-12-06 15:45:42.372015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.482 qpair failed and we were unable to recover it.
00:28:36.482 [2024-12-06 15:45:42.381844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.482 [2024-12-06 15:45:42.381901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.482 [2024-12-06 15:45:42.381915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.482 [2024-12-06 15:45:42.381923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.482 [2024-12-06 15:45:42.381929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.482 [2024-12-06 15:45:42.381944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.482 qpair failed and we were unable to recover it.
00:28:36.482 [2024-12-06 15:45:42.391974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.482 [2024-12-06 15:45:42.392061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.482 [2024-12-06 15:45:42.392074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.482 [2024-12-06 15:45:42.392082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.482 [2024-12-06 15:45:42.392088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:36.482 [2024-12-06 15:45:42.392103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:36.482 qpair failed and we were unable to recover it.
00:28:36.482 [2024-12-06 15:45:42.402032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.482 [2024-12-06 15:45:42.402112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.482 [2024-12-06 15:45:42.402126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.482 [2024-12-06 15:45:42.402133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.482 [2024-12-06 15:45:42.402140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.482 [2024-12-06 15:45:42.402154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.482 qpair failed and we were unable to recover it. 
00:28:36.482 [2024-12-06 15:45:42.411921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.482 [2024-12-06 15:45:42.411978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.482 [2024-12-06 15:45:42.411992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.482 [2024-12-06 15:45:42.411999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.482 [2024-12-06 15:45:42.412005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.482 [2024-12-06 15:45:42.412021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.482 qpair failed and we were unable to recover it. 
00:28:36.482 [2024-12-06 15:45:42.422039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.482 [2024-12-06 15:45:42.422098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.482 [2024-12-06 15:45:42.422111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.482 [2024-12-06 15:45:42.422118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.482 [2024-12-06 15:45:42.422124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.482 [2024-12-06 15:45:42.422139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.482 qpair failed and we were unable to recover it. 
00:28:36.482 [2024-12-06 15:45:42.431971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.482 [2024-12-06 15:45:42.432029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.482 [2024-12-06 15:45:42.432042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.482 [2024-12-06 15:45:42.432049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.482 [2024-12-06 15:45:42.432056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.482 [2024-12-06 15:45:42.432070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.482 qpair failed and we were unable to recover it. 
00:28:36.482 [2024-12-06 15:45:42.442023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.482 [2024-12-06 15:45:42.442076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.482 [2024-12-06 15:45:42.442090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.482 [2024-12-06 15:45:42.442097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.482 [2024-12-06 15:45:42.442103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.482 [2024-12-06 15:45:42.442117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.482 qpair failed and we were unable to recover it. 
00:28:36.482 [2024-12-06 15:45:42.452035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.482 [2024-12-06 15:45:42.452091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.482 [2024-12-06 15:45:42.452105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.482 [2024-12-06 15:45:42.452112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.482 [2024-12-06 15:45:42.452118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.482 [2024-12-06 15:45:42.452133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.482 qpair failed and we were unable to recover it. 
00:28:36.482 [2024-12-06 15:45:42.462184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.482 [2024-12-06 15:45:42.462251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.482 [2024-12-06 15:45:42.462264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.482 [2024-12-06 15:45:42.462275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.482 [2024-12-06 15:45:42.462281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.482 [2024-12-06 15:45:42.462296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.482 qpair failed and we were unable to recover it. 
00:28:36.482 [2024-12-06 15:45:42.472162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.482 [2024-12-06 15:45:42.472232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.482 [2024-12-06 15:45:42.472246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.482 [2024-12-06 15:45:42.472254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.482 [2024-12-06 15:45:42.472260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.482 [2024-12-06 15:45:42.472275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.482 qpair failed and we were unable to recover it. 
00:28:36.741 [2024-12-06 15:45:42.482230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.741 [2024-12-06 15:45:42.482285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.741 [2024-12-06 15:45:42.482299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.741 [2024-12-06 15:45:42.482306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.741 [2024-12-06 15:45:42.482313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.741 [2024-12-06 15:45:42.482328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.741 qpair failed and we were unable to recover it. 
00:28:36.741 [2024-12-06 15:45:42.492279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.741 [2024-12-06 15:45:42.492389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.741 [2024-12-06 15:45:42.492403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.741 [2024-12-06 15:45:42.492410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.741 [2024-12-06 15:45:42.492416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.741 [2024-12-06 15:45:42.492431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.741 qpair failed and we were unable to recover it. 
00:28:36.741 [2024-12-06 15:45:42.502284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.741 [2024-12-06 15:45:42.502340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.741 [2024-12-06 15:45:42.502355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.741 [2024-12-06 15:45:42.502362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.741 [2024-12-06 15:45:42.502379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.741 [2024-12-06 15:45:42.502398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.741 qpair failed and we were unable to recover it. 
00:28:36.741 [2024-12-06 15:45:42.512292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.741 [2024-12-06 15:45:42.512347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.741 [2024-12-06 15:45:42.512361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.741 [2024-12-06 15:45:42.512372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.741 [2024-12-06 15:45:42.512379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.741 [2024-12-06 15:45:42.512394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.741 qpair failed and we were unable to recover it. 
00:28:36.741 [2024-12-06 15:45:42.522336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.741 [2024-12-06 15:45:42.522397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.741 [2024-12-06 15:45:42.522410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.741 [2024-12-06 15:45:42.522417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.741 [2024-12-06 15:45:42.522424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.741 [2024-12-06 15:45:42.522439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.741 qpair failed and we were unable to recover it. 
00:28:36.741 [2024-12-06 15:45:42.532393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.741 [2024-12-06 15:45:42.532448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.741 [2024-12-06 15:45:42.532461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.741 [2024-12-06 15:45:42.532468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.741 [2024-12-06 15:45:42.532475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.741 [2024-12-06 15:45:42.532489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.741 qpair failed and we were unable to recover it. 
00:28:36.741 [2024-12-06 15:45:42.542374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.741 [2024-12-06 15:45:42.542447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.741 [2024-12-06 15:45:42.542460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.741 [2024-12-06 15:45:42.542468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.741 [2024-12-06 15:45:42.542474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.741 [2024-12-06 15:45:42.542488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.742 qpair failed and we were unable to recover it. 
00:28:36.742 [2024-12-06 15:45:42.552399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.742 [2024-12-06 15:45:42.552461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.742 [2024-12-06 15:45:42.552475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.742 [2024-12-06 15:45:42.552482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.742 [2024-12-06 15:45:42.552489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.742 [2024-12-06 15:45:42.552504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.742 qpair failed and we were unable to recover it. 
00:28:36.742 [2024-12-06 15:45:42.562451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.742 [2024-12-06 15:45:42.562506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.742 [2024-12-06 15:45:42.562521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.742 [2024-12-06 15:45:42.562528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.742 [2024-12-06 15:45:42.562536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.742 [2024-12-06 15:45:42.562553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.742 qpair failed and we were unable to recover it. 
00:28:36.742 [2024-12-06 15:45:42.572462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.742 [2024-12-06 15:45:42.572533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.742 [2024-12-06 15:45:42.572548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.742 [2024-12-06 15:45:42.572555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.742 [2024-12-06 15:45:42.572562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.742 [2024-12-06 15:45:42.572577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.742 qpair failed and we were unable to recover it. 
00:28:36.742 [2024-12-06 15:45:42.582483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.742 [2024-12-06 15:45:42.582541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.742 [2024-12-06 15:45:42.582554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.742 [2024-12-06 15:45:42.582561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.742 [2024-12-06 15:45:42.582568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.742 [2024-12-06 15:45:42.582583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.742 qpair failed and we were unable to recover it. 
00:28:36.742 [2024-12-06 15:45:42.592474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.742 [2024-12-06 15:45:42.592575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.742 [2024-12-06 15:45:42.592592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.742 [2024-12-06 15:45:42.592600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.742 [2024-12-06 15:45:42.592607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.742 [2024-12-06 15:45:42.592621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.742 qpair failed and we were unable to recover it. 
00:28:36.742 [2024-12-06 15:45:42.602549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.742 [2024-12-06 15:45:42.602602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.742 [2024-12-06 15:45:42.602616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.742 [2024-12-06 15:45:42.602623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.742 [2024-12-06 15:45:42.602630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.742 [2024-12-06 15:45:42.602645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.742 qpair failed and we were unable to recover it. 
00:28:36.742 [2024-12-06 15:45:42.612579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.742 [2024-12-06 15:45:42.612637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.742 [2024-12-06 15:45:42.612650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.742 [2024-12-06 15:45:42.612657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.742 [2024-12-06 15:45:42.612664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.742 [2024-12-06 15:45:42.612679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.742 qpair failed and we were unable to recover it. 
00:28:36.742 [2024-12-06 15:45:42.622649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.742 [2024-12-06 15:45:42.622734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.742 [2024-12-06 15:45:42.622748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.742 [2024-12-06 15:45:42.622755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.742 [2024-12-06 15:45:42.622762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.742 [2024-12-06 15:45:42.622777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.742 qpair failed and we were unable to recover it. 
00:28:36.742 [2024-12-06 15:45:42.632626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.742 [2024-12-06 15:45:42.632683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.742 [2024-12-06 15:45:42.632695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.742 [2024-12-06 15:45:42.632702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.742 [2024-12-06 15:45:42.632711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.742 [2024-12-06 15:45:42.632726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.742 qpair failed and we were unable to recover it. 
00:28:36.742 [2024-12-06 15:45:42.642701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.742 [2024-12-06 15:45:42.642786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.742 [2024-12-06 15:45:42.642800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.742 [2024-12-06 15:45:42.642807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.742 [2024-12-06 15:45:42.642813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.742 [2024-12-06 15:45:42.642828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.742 qpair failed and we were unable to recover it. 
00:28:36.742 [2024-12-06 15:45:42.652693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.742 [2024-12-06 15:45:42.652747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.742 [2024-12-06 15:45:42.652761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.742 [2024-12-06 15:45:42.652768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.742 [2024-12-06 15:45:42.652775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.742 [2024-12-06 15:45:42.652789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.742 qpair failed and we were unable to recover it. 
00:28:36.742 [2024-12-06 15:45:42.662713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.742 [2024-12-06 15:45:42.662768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.742 [2024-12-06 15:45:42.662781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.742 [2024-12-06 15:45:42.662788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.742 [2024-12-06 15:45:42.662794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.742 [2024-12-06 15:45:42.662809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.742 qpair failed and we were unable to recover it. 
00:28:36.742 [2024-12-06 15:45:42.672742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.742 [2024-12-06 15:45:42.672796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.742 [2024-12-06 15:45:42.672809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.742 [2024-12-06 15:45:42.672816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.742 [2024-12-06 15:45:42.672823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.742 [2024-12-06 15:45:42.672837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.742 qpair failed and we were unable to recover it. 
00:28:36.743 [2024-12-06 15:45:42.682797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.743 [2024-12-06 15:45:42.682851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.743 [2024-12-06 15:45:42.682865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.743 [2024-12-06 15:45:42.682872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.743 [2024-12-06 15:45:42.682879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.743 [2024-12-06 15:45:42.682894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.743 qpair failed and we were unable to recover it. 
00:28:36.743 [2024-12-06 15:45:42.692799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.743 [2024-12-06 15:45:42.692870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.743 [2024-12-06 15:45:42.692883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.743 [2024-12-06 15:45:42.692890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.743 [2024-12-06 15:45:42.692896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.743 [2024-12-06 15:45:42.692912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.743 qpair failed and we were unable to recover it. 
00:28:36.743 [2024-12-06 15:45:42.702820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.743 [2024-12-06 15:45:42.702879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.743 [2024-12-06 15:45:42.702893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.743 [2024-12-06 15:45:42.702900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.743 [2024-12-06 15:45:42.702906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.743 [2024-12-06 15:45:42.702922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.743 qpair failed and we were unable to recover it. 
00:28:36.743 [2024-12-06 15:45:42.712909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.743 [2024-12-06 15:45:42.712961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.743 [2024-12-06 15:45:42.712976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.743 [2024-12-06 15:45:42.712983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.743 [2024-12-06 15:45:42.712990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.743 [2024-12-06 15:45:42.713006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.743 qpair failed and we were unable to recover it. 
00:28:36.743 [2024-12-06 15:45:42.722887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.743 [2024-12-06 15:45:42.722939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.743 [2024-12-06 15:45:42.722956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.743 [2024-12-06 15:45:42.722963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.743 [2024-12-06 15:45:42.722969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.743 [2024-12-06 15:45:42.722984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.743 qpair failed and we were unable to recover it. 
00:28:36.743 [2024-12-06 15:45:42.732941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.743 [2024-12-06 15:45:42.733000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.743 [2024-12-06 15:45:42.733014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.743 [2024-12-06 15:45:42.733021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.743 [2024-12-06 15:45:42.733027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:36.743 [2024-12-06 15:45:42.733042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.743 qpair failed and we were unable to recover it. 
00:28:37.002 [2024-12-06 15:45:42.742974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.002 [2024-12-06 15:45:42.743049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.002 [2024-12-06 15:45:42.743063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.002 [2024-12-06 15:45:42.743070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.002 [2024-12-06 15:45:42.743079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.002 [2024-12-06 15:45:42.743094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.002 qpair failed and we were unable to recover it. 
00:28:37.002 [2024-12-06 15:45:42.752966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.002 [2024-12-06 15:45:42.753038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.002 [2024-12-06 15:45:42.753051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.002 [2024-12-06 15:45:42.753058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.002 [2024-12-06 15:45:42.753065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.002 [2024-12-06 15:45:42.753079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.002 qpair failed and we were unable to recover it. 
00:28:37.002 [2024-12-06 15:45:42.762999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.002 [2024-12-06 15:45:42.763050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.002 [2024-12-06 15:45:42.763063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.002 [2024-12-06 15:45:42.763070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.002 [2024-12-06 15:45:42.763080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.002 [2024-12-06 15:45:42.763095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.002 qpair failed and we were unable to recover it. 
00:28:37.002 [2024-12-06 15:45:42.772962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.002 [2024-12-06 15:45:42.773018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.002 [2024-12-06 15:45:42.773031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.002 [2024-12-06 15:45:42.773038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.002 [2024-12-06 15:45:42.773045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.002 [2024-12-06 15:45:42.773059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.002 qpair failed and we were unable to recover it. 
00:28:37.002 [2024-12-06 15:45:42.783055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.003 [2024-12-06 15:45:42.783129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.003 [2024-12-06 15:45:42.783142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.003 [2024-12-06 15:45:42.783150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.003 [2024-12-06 15:45:42.783157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.003 [2024-12-06 15:45:42.783172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.003 qpair failed and we were unable to recover it. 
00:28:37.003 [2024-12-06 15:45:42.793082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.003 [2024-12-06 15:45:42.793135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.003 [2024-12-06 15:45:42.793148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.003 [2024-12-06 15:45:42.793155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.003 [2024-12-06 15:45:42.793161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.003 [2024-12-06 15:45:42.793176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.003 qpair failed and we were unable to recover it. 
00:28:37.003 [2024-12-06 15:45:42.803100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.003 [2024-12-06 15:45:42.803153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.003 [2024-12-06 15:45:42.803168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.003 [2024-12-06 15:45:42.803174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.003 [2024-12-06 15:45:42.803181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.003 [2024-12-06 15:45:42.803196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.003 qpair failed and we were unable to recover it. 
00:28:37.003 [2024-12-06 15:45:42.813127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.003 [2024-12-06 15:45:42.813202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.003 [2024-12-06 15:45:42.813217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.003 [2024-12-06 15:45:42.813225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.003 [2024-12-06 15:45:42.813232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.003 [2024-12-06 15:45:42.813247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.003 qpair failed and we were unable to recover it. 
00:28:37.003 [2024-12-06 15:45:42.823158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.003 [2024-12-06 15:45:42.823217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.003 [2024-12-06 15:45:42.823231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.003 [2024-12-06 15:45:42.823238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.003 [2024-12-06 15:45:42.823244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.003 [2024-12-06 15:45:42.823259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.003 qpair failed and we were unable to recover it. 
00:28:37.003 [2024-12-06 15:45:42.833179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.003 [2024-12-06 15:45:42.833229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.003 [2024-12-06 15:45:42.833243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.003 [2024-12-06 15:45:42.833250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.003 [2024-12-06 15:45:42.833256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.003 [2024-12-06 15:45:42.833271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.003 qpair failed and we were unable to recover it. 
00:28:37.003 [2024-12-06 15:45:42.843216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.003 [2024-12-06 15:45:42.843283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.003 [2024-12-06 15:45:42.843296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.003 [2024-12-06 15:45:42.843303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.003 [2024-12-06 15:45:42.843310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.003 [2024-12-06 15:45:42.843324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.003 qpair failed and we were unable to recover it. 
00:28:37.003 [2024-12-06 15:45:42.853278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.003 [2024-12-06 15:45:42.853381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.003 [2024-12-06 15:45:42.853396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.003 [2024-12-06 15:45:42.853403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.003 [2024-12-06 15:45:42.853410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.003 [2024-12-06 15:45:42.853425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.003 qpair failed and we were unable to recover it. 
00:28:37.003 [2024-12-06 15:45:42.863229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.003 [2024-12-06 15:45:42.863289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.003 [2024-12-06 15:45:42.863303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.003 [2024-12-06 15:45:42.863311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.003 [2024-12-06 15:45:42.863317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.003 [2024-12-06 15:45:42.863332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.003 qpair failed and we were unable to recover it. 
00:28:37.003 [2024-12-06 15:45:42.873305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.004 [2024-12-06 15:45:42.873362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.004 [2024-12-06 15:45:42.873379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.004 [2024-12-06 15:45:42.873387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.004 [2024-12-06 15:45:42.873393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.004 [2024-12-06 15:45:42.873408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.004 qpair failed and we were unable to recover it. 
00:28:37.004 [2024-12-06 15:45:42.883330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.004 [2024-12-06 15:45:42.883413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.004 [2024-12-06 15:45:42.883427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.004 [2024-12-06 15:45:42.883434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.004 [2024-12-06 15:45:42.883441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.004 [2024-12-06 15:45:42.883456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.004 qpair failed and we were unable to recover it. 
00:28:37.004 [2024-12-06 15:45:42.893402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.004 [2024-12-06 15:45:42.893458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.004 [2024-12-06 15:45:42.893471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.004 [2024-12-06 15:45:42.893482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.004 [2024-12-06 15:45:42.893488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.004 [2024-12-06 15:45:42.893503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.004 qpair failed and we were unable to recover it. 
00:28:37.004 [2024-12-06 15:45:42.903411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.004 [2024-12-06 15:45:42.903468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.004 [2024-12-06 15:45:42.903482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.004 [2024-12-06 15:45:42.903490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.004 [2024-12-06 15:45:42.903496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.004 [2024-12-06 15:45:42.903512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.004 qpair failed and we were unable to recover it. 
00:28:37.004 [2024-12-06 15:45:42.913426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.004 [2024-12-06 15:45:42.913482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.004 [2024-12-06 15:45:42.913497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.004 [2024-12-06 15:45:42.913505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.004 [2024-12-06 15:45:42.913511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.004 [2024-12-06 15:45:42.913526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.004 qpair failed and we were unable to recover it. 
00:28:37.004 [2024-12-06 15:45:42.923445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.004 [2024-12-06 15:45:42.923496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.004 [2024-12-06 15:45:42.923510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.004 [2024-12-06 15:45:42.923518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.004 [2024-12-06 15:45:42.923524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.004 [2024-12-06 15:45:42.923539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.004 qpair failed and we were unable to recover it. 
00:28:37.004 [2024-12-06 15:45:42.933460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.004 [2024-12-06 15:45:42.933517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.004 [2024-12-06 15:45:42.933531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.004 [2024-12-06 15:45:42.933539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.004 [2024-12-06 15:45:42.933546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.004 [2024-12-06 15:45:42.933564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.004 qpair failed and we were unable to recover it. 
00:28:37.004 [2024-12-06 15:45:42.943505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.004 [2024-12-06 15:45:42.943556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.004 [2024-12-06 15:45:42.943570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.004 [2024-12-06 15:45:42.943577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.004 [2024-12-06 15:45:42.943583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.004 [2024-12-06 15:45:42.943598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.004 qpair failed and we were unable to recover it. 
00:28:37.004 [2024-12-06 15:45:42.953518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.004 [2024-12-06 15:45:42.953572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.004 [2024-12-06 15:45:42.953586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.004 [2024-12-06 15:45:42.953593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.004 [2024-12-06 15:45:42.953599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.004 [2024-12-06 15:45:42.953613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.004 qpair failed and we were unable to recover it. 
00:28:37.004 [2024-12-06 15:45:42.963565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.004 [2024-12-06 15:45:42.963616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.004 [2024-12-06 15:45:42.963629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.005 [2024-12-06 15:45:42.963636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.005 [2024-12-06 15:45:42.963642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.005 [2024-12-06 15:45:42.963657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.005 qpair failed and we were unable to recover it. 
00:28:37.005 [2024-12-06 15:45:42.973605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.005 [2024-12-06 15:45:42.973660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.005 [2024-12-06 15:45:42.973674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.005 [2024-12-06 15:45:42.973681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.005 [2024-12-06 15:45:42.973687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.005 [2024-12-06 15:45:42.973702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.005 qpair failed and we were unable to recover it. 
00:28:37.005 [2024-12-06 15:45:42.983658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.005 [2024-12-06 15:45:42.983723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.005 [2024-12-06 15:45:42.983736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.005 [2024-12-06 15:45:42.983744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.005 [2024-12-06 15:45:42.983750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.005 [2024-12-06 15:45:42.983764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.005 qpair failed and we were unable to recover it. 
00:28:37.005 [2024-12-06 15:45:42.993690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.005 [2024-12-06 15:45:42.993749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.005 [2024-12-06 15:45:42.993762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.005 [2024-12-06 15:45:42.993769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.005 [2024-12-06 15:45:42.993775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.005 [2024-12-06 15:45:42.993790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.005 qpair failed and we were unable to recover it. 
00:28:37.265 [2024-12-06 15:45:43.003705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.265 [2024-12-06 15:45:43.003759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.265 [2024-12-06 15:45:43.003772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.265 [2024-12-06 15:45:43.003779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.265 [2024-12-06 15:45:43.003785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.265 [2024-12-06 15:45:43.003801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.265 qpair failed and we were unable to recover it.
00:28:37.265 [2024-12-06 15:45:43.013726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.265 [2024-12-06 15:45:43.013788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.265 [2024-12-06 15:45:43.013802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.265 [2024-12-06 15:45:43.013809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.265 [2024-12-06 15:45:43.013815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.265 [2024-12-06 15:45:43.013830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.265 qpair failed and we were unable to recover it.
00:28:37.265 [2024-12-06 15:45:43.023794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.265 [2024-12-06 15:45:43.023887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.265 [2024-12-06 15:45:43.023900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.265 [2024-12-06 15:45:43.023910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.265 [2024-12-06 15:45:43.023917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.265 [2024-12-06 15:45:43.023931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.265 qpair failed and we were unable to recover it.
00:28:37.265 [2024-12-06 15:45:43.033686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.265 [2024-12-06 15:45:43.033736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.265 [2024-12-06 15:45:43.033750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.265 [2024-12-06 15:45:43.033757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.265 [2024-12-06 15:45:43.033763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.265 [2024-12-06 15:45:43.033778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.265 qpair failed and we were unable to recover it.
00:28:37.265 [2024-12-06 15:45:43.043790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.265 [2024-12-06 15:45:43.043839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.265 [2024-12-06 15:45:43.043852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.265 [2024-12-06 15:45:43.043859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.265 [2024-12-06 15:45:43.043865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.265 [2024-12-06 15:45:43.043879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.265 qpair failed and we were unable to recover it.
00:28:37.265 [2024-12-06 15:45:43.053878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.265 [2024-12-06 15:45:43.053934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.265 [2024-12-06 15:45:43.053950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.265 [2024-12-06 15:45:43.053958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.265 [2024-12-06 15:45:43.053964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.265 [2024-12-06 15:45:43.053980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.265 qpair failed and we were unable to recover it.
00:28:37.265 [2024-12-06 15:45:43.063859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.265 [2024-12-06 15:45:43.063914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.265 [2024-12-06 15:45:43.063929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.265 [2024-12-06 15:45:43.063936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.265 [2024-12-06 15:45:43.063942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.265 [2024-12-06 15:45:43.063962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.265 qpair failed and we were unable to recover it.
00:28:37.265 [2024-12-06 15:45:43.073806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.265 [2024-12-06 15:45:43.073861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.265 [2024-12-06 15:45:43.073876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.265 [2024-12-06 15:45:43.073883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.265 [2024-12-06 15:45:43.073889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.265 [2024-12-06 15:45:43.073904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.265 qpair failed and we were unable to recover it.
00:28:37.265 [2024-12-06 15:45:43.083900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.265 [2024-12-06 15:45:43.083954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.265 [2024-12-06 15:45:43.083968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.265 [2024-12-06 15:45:43.083975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.265 [2024-12-06 15:45:43.083982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.265 [2024-12-06 15:45:43.083996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.265 qpair failed and we were unable to recover it.
00:28:37.266 [2024-12-06 15:45:43.093989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.266 [2024-12-06 15:45:43.094060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.266 [2024-12-06 15:45:43.094073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.266 [2024-12-06 15:45:43.094080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.266 [2024-12-06 15:45:43.094087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.266 [2024-12-06 15:45:43.094102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.266 qpair failed and we were unable to recover it.
00:28:37.266 [2024-12-06 15:45:43.103997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.266 [2024-12-06 15:45:43.104052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.266 [2024-12-06 15:45:43.104067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.266 [2024-12-06 15:45:43.104074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.266 [2024-12-06 15:45:43.104080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.266 [2024-12-06 15:45:43.104095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.266 qpair failed and we were unable to recover it.
00:28:37.266 [2024-12-06 15:45:43.113980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.266 [2024-12-06 15:45:43.114057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.266 [2024-12-06 15:45:43.114072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.266 [2024-12-06 15:45:43.114080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.266 [2024-12-06 15:45:43.114086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.266 [2024-12-06 15:45:43.114101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.266 qpair failed and we were unable to recover it.
00:28:37.266 [2024-12-06 15:45:43.124006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.266 [2024-12-06 15:45:43.124060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.266 [2024-12-06 15:45:43.124075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.266 [2024-12-06 15:45:43.124082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.266 [2024-12-06 15:45:43.124088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.266 [2024-12-06 15:45:43.124104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.266 qpair failed and we were unable to recover it.
00:28:37.266 [2024-12-06 15:45:43.134048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.266 [2024-12-06 15:45:43.134103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.266 [2024-12-06 15:45:43.134117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.266 [2024-12-06 15:45:43.134124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.266 [2024-12-06 15:45:43.134130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.266 [2024-12-06 15:45:43.134145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.266 qpair failed and we were unable to recover it.
00:28:37.266 [2024-12-06 15:45:43.144092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.266 [2024-12-06 15:45:43.144145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.266 [2024-12-06 15:45:43.144159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.266 [2024-12-06 15:45:43.144166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.266 [2024-12-06 15:45:43.144172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.266 [2024-12-06 15:45:43.144187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.266 qpair failed and we were unable to recover it.
00:28:37.266 [2024-12-06 15:45:43.154097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.266 [2024-12-06 15:45:43.154148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.266 [2024-12-06 15:45:43.154165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.266 [2024-12-06 15:45:43.154172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.266 [2024-12-06 15:45:43.154178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.266 [2024-12-06 15:45:43.154194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.266 qpair failed and we were unable to recover it.
00:28:37.266 [2024-12-06 15:45:43.164133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.266 [2024-12-06 15:45:43.164188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.266 [2024-12-06 15:45:43.164202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.266 [2024-12-06 15:45:43.164209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.266 [2024-12-06 15:45:43.164216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.266 [2024-12-06 15:45:43.164230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.266 qpair failed and we were unable to recover it.
00:28:37.266 [2024-12-06 15:45:43.174162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.266 [2024-12-06 15:45:43.174218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.266 [2024-12-06 15:45:43.174231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.266 [2024-12-06 15:45:43.174238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.266 [2024-12-06 15:45:43.174245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.266 [2024-12-06 15:45:43.174260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.266 qpair failed and we were unable to recover it.
00:28:37.266 [2024-12-06 15:45:43.184208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.267 [2024-12-06 15:45:43.184270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.267 [2024-12-06 15:45:43.184283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.267 [2024-12-06 15:45:43.184291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.267 [2024-12-06 15:45:43.184297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.267 [2024-12-06 15:45:43.184312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.267 qpair failed and we were unable to recover it.
00:28:37.267 [2024-12-06 15:45:43.194220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.267 [2024-12-06 15:45:43.194293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.267 [2024-12-06 15:45:43.194307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.267 [2024-12-06 15:45:43.194315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.267 [2024-12-06 15:45:43.194324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.267 [2024-12-06 15:45:43.194339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.267 qpair failed and we were unable to recover it.
00:28:37.267 [2024-12-06 15:45:43.204283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.267 [2024-12-06 15:45:43.204346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.267 [2024-12-06 15:45:43.204360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.267 [2024-12-06 15:45:43.204370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.267 [2024-12-06 15:45:43.204377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.267 [2024-12-06 15:45:43.204393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.267 qpair failed and we were unable to recover it.
00:28:37.267 [2024-12-06 15:45:43.214283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.267 [2024-12-06 15:45:43.214336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.267 [2024-12-06 15:45:43.214349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.267 [2024-12-06 15:45:43.214356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.267 [2024-12-06 15:45:43.214362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.267 [2024-12-06 15:45:43.214380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.267 qpair failed and we were unable to recover it.
00:28:37.267 [2024-12-06 15:45:43.224305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.267 [2024-12-06 15:45:43.224353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.267 [2024-12-06 15:45:43.224369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.267 [2024-12-06 15:45:43.224376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.267 [2024-12-06 15:45:43.224383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.267 [2024-12-06 15:45:43.224397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.267 qpair failed and we were unable to recover it.
00:28:37.267 [2024-12-06 15:45:43.234303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.267 [2024-12-06 15:45:43.234358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.267 [2024-12-06 15:45:43.234374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.267 [2024-12-06 15:45:43.234382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.267 [2024-12-06 15:45:43.234388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.267 [2024-12-06 15:45:43.234403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.267 qpair failed and we were unable to recover it.
00:28:37.267 [2024-12-06 15:45:43.244365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.267 [2024-12-06 15:45:43.244420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.267 [2024-12-06 15:45:43.244434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.267 [2024-12-06 15:45:43.244440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.267 [2024-12-06 15:45:43.244447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.267 [2024-12-06 15:45:43.244462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.267 qpair failed and we were unable to recover it.
00:28:37.267 [2024-12-06 15:45:43.254440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.267 [2024-12-06 15:45:43.254503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.267 [2024-12-06 15:45:43.254516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.267 [2024-12-06 15:45:43.254523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.267 [2024-12-06 15:45:43.254529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.267 [2024-12-06 15:45:43.254544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.267 qpair failed and we were unable to recover it.
00:28:37.526 [2024-12-06 15:45:43.264451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.526 [2024-12-06 15:45:43.264511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.526 [2024-12-06 15:45:43.264524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.526 [2024-12-06 15:45:43.264531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.526 [2024-12-06 15:45:43.264538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.526 [2024-12-06 15:45:43.264552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.526 qpair failed and we were unable to recover it.
00:28:37.526 [2024-12-06 15:45:43.274472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.526 [2024-12-06 15:45:43.274528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.526 [2024-12-06 15:45:43.274541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.526 [2024-12-06 15:45:43.274548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.526 [2024-12-06 15:45:43.274554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.526 [2024-12-06 15:45:43.274570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.526 qpair failed and we were unable to recover it.
00:28:37.526 [2024-12-06 15:45:43.284416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.526 [2024-12-06 15:45:43.284472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.526 [2024-12-06 15:45:43.284489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.526 [2024-12-06 15:45:43.284496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.526 [2024-12-06 15:45:43.284502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.526 [2024-12-06 15:45:43.284517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.526 qpair failed and we were unable to recover it.
00:28:37.526 [2024-12-06 15:45:43.294588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.526 [2024-12-06 15:45:43.294675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.526 [2024-12-06 15:45:43.294688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.526 [2024-12-06 15:45:43.294696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.526 [2024-12-06 15:45:43.294702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.526 [2024-12-06 15:45:43.294716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.526 qpair failed and we were unable to recover it.
00:28:37.526 [2024-12-06 15:45:43.304554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.526 [2024-12-06 15:45:43.304615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.526 [2024-12-06 15:45:43.304629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.526 [2024-12-06 15:45:43.304637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.526 [2024-12-06 15:45:43.304643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.526 [2024-12-06 15:45:43.304659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.526 qpair failed and we were unable to recover it.
00:28:37.526 [2024-12-06 15:45:43.314577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.526 [2024-12-06 15:45:43.314633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.526 [2024-12-06 15:45:43.314648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.526 [2024-12-06 15:45:43.314657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.526 [2024-12-06 15:45:43.314663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.526 [2024-12-06 15:45:43.314679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.526 qpair failed and we were unable to recover it.
00:28:37.526 [2024-12-06 15:45:43.324606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:37.526 [2024-12-06 15:45:43.324657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:37.526 [2024-12-06 15:45:43.324670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:37.526 [2024-12-06 15:45:43.324677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:37.526 [2024-12-06 15:45:43.324686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:37.526 [2024-12-06 15:45:43.324701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:37.526 qpair failed and we were unable to recover it.
00:28:37.526 [2024-12-06 15:45:43.334630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.526 [2024-12-06 15:45:43.334700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.526 [2024-12-06 15:45:43.334715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.526 [2024-12-06 15:45:43.334722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.526 [2024-12-06 15:45:43.334728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.526 [2024-12-06 15:45:43.334743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.526 qpair failed and we were unable to recover it. 
00:28:37.526 [2024-12-06 15:45:43.344649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.526 [2024-12-06 15:45:43.344749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.526 [2024-12-06 15:45:43.344766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.526 [2024-12-06 15:45:43.344773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.527 [2024-12-06 15:45:43.344780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.527 [2024-12-06 15:45:43.344795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.527 qpair failed and we were unable to recover it. 
00:28:37.527 [2024-12-06 15:45:43.354741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.527 [2024-12-06 15:45:43.354797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.527 [2024-12-06 15:45:43.354811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.527 [2024-12-06 15:45:43.354818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.527 [2024-12-06 15:45:43.354824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.527 [2024-12-06 15:45:43.354839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.527 qpair failed and we were unable to recover it. 
00:28:37.527 [2024-12-06 15:45:43.364735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.527 [2024-12-06 15:45:43.364789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.527 [2024-12-06 15:45:43.364802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.527 [2024-12-06 15:45:43.364809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.527 [2024-12-06 15:45:43.364815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.527 [2024-12-06 15:45:43.364830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.527 qpair failed and we were unable to recover it. 
00:28:37.527 [2024-12-06 15:45:43.374759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.527 [2024-12-06 15:45:43.374818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.527 [2024-12-06 15:45:43.374832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.527 [2024-12-06 15:45:43.374839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.527 [2024-12-06 15:45:43.374846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.527 [2024-12-06 15:45:43.374861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.527 qpair failed and we were unable to recover it. 
00:28:37.527 [2024-12-06 15:45:43.384763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.527 [2024-12-06 15:45:43.384819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.527 [2024-12-06 15:45:43.384832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.527 [2024-12-06 15:45:43.384839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.527 [2024-12-06 15:45:43.384845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.527 [2024-12-06 15:45:43.384860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.527 qpair failed and we were unable to recover it. 
00:28:37.527 [2024-12-06 15:45:43.394788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.527 [2024-12-06 15:45:43.394844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.527 [2024-12-06 15:45:43.394858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.527 [2024-12-06 15:45:43.394866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.527 [2024-12-06 15:45:43.394872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.527 [2024-12-06 15:45:43.394887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.527 qpair failed and we were unable to recover it. 
00:28:37.527 [2024-12-06 15:45:43.404811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.527 [2024-12-06 15:45:43.404864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.527 [2024-12-06 15:45:43.404877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.527 [2024-12-06 15:45:43.404884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.527 [2024-12-06 15:45:43.404891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.527 [2024-12-06 15:45:43.404906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.527 qpair failed and we were unable to recover it. 
00:28:37.527 [2024-12-06 15:45:43.414851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.527 [2024-12-06 15:45:43.414915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.527 [2024-12-06 15:45:43.414930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.527 [2024-12-06 15:45:43.414938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.527 [2024-12-06 15:45:43.414944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.527 [2024-12-06 15:45:43.414958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.527 qpair failed and we were unable to recover it. 
00:28:37.527 [2024-12-06 15:45:43.424877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.527 [2024-12-06 15:45:43.424930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.527 [2024-12-06 15:45:43.424943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.527 [2024-12-06 15:45:43.424950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.527 [2024-12-06 15:45:43.424956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.527 [2024-12-06 15:45:43.424971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.527 qpair failed and we were unable to recover it. 
00:28:37.527 [2024-12-06 15:45:43.434896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.527 [2024-12-06 15:45:43.434948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.527 [2024-12-06 15:45:43.434961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.527 [2024-12-06 15:45:43.434968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.527 [2024-12-06 15:45:43.434975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.527 [2024-12-06 15:45:43.434990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.527 qpair failed and we were unable to recover it. 
00:28:37.527 [2024-12-06 15:45:43.444920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.527 [2024-12-06 15:45:43.444974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.527 [2024-12-06 15:45:43.444987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.527 [2024-12-06 15:45:43.444994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.527 [2024-12-06 15:45:43.445001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.527 [2024-12-06 15:45:43.445015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.527 qpair failed and we were unable to recover it. 
00:28:37.527 [2024-12-06 15:45:43.454957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.527 [2024-12-06 15:45:43.455013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.527 [2024-12-06 15:45:43.455026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.527 [2024-12-06 15:45:43.455037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.527 [2024-12-06 15:45:43.455043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.527 [2024-12-06 15:45:43.455058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.527 qpair failed and we were unable to recover it. 
00:28:37.527 [2024-12-06 15:45:43.464961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.527 [2024-12-06 15:45:43.465036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.527 [2024-12-06 15:45:43.465049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.527 [2024-12-06 15:45:43.465057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.527 [2024-12-06 15:45:43.465064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.527 [2024-12-06 15:45:43.465078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.527 qpair failed and we were unable to recover it. 
00:28:37.527 [2024-12-06 15:45:43.475048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.527 [2024-12-06 15:45:43.475104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.527 [2024-12-06 15:45:43.475118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.527 [2024-12-06 15:45:43.475125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.528 [2024-12-06 15:45:43.475131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.528 [2024-12-06 15:45:43.475146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.528 qpair failed and we were unable to recover it. 
00:28:37.528 [2024-12-06 15:45:43.485067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.528 [2024-12-06 15:45:43.485122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.528 [2024-12-06 15:45:43.485136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.528 [2024-12-06 15:45:43.485143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.528 [2024-12-06 15:45:43.485152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.528 [2024-12-06 15:45:43.485167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.528 qpair failed and we were unable to recover it. 
00:28:37.528 [2024-12-06 15:45:43.495067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.528 [2024-12-06 15:45:43.495125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.528 [2024-12-06 15:45:43.495139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.528 [2024-12-06 15:45:43.495146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.528 [2024-12-06 15:45:43.495153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.528 [2024-12-06 15:45:43.495171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.528 qpair failed and we were unable to recover it. 
00:28:37.528 [2024-12-06 15:45:43.505078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.528 [2024-12-06 15:45:43.505158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.528 [2024-12-06 15:45:43.505172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.528 [2024-12-06 15:45:43.505179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.528 [2024-12-06 15:45:43.505185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.528 [2024-12-06 15:45:43.505200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.528 qpair failed and we were unable to recover it. 
00:28:37.528 [2024-12-06 15:45:43.515046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.528 [2024-12-06 15:45:43.515102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.528 [2024-12-06 15:45:43.515117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.528 [2024-12-06 15:45:43.515123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.528 [2024-12-06 15:45:43.515130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.528 [2024-12-06 15:45:43.515145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.528 qpair failed and we were unable to recover it. 
00:28:37.787 [2024-12-06 15:45:43.525197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.787 [2024-12-06 15:45:43.525258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.787 [2024-12-06 15:45:43.525272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.787 [2024-12-06 15:45:43.525280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.787 [2024-12-06 15:45:43.525286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.787 [2024-12-06 15:45:43.525300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.787 qpair failed and we were unable to recover it. 
00:28:37.787 [2024-12-06 15:45:43.535239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.787 [2024-12-06 15:45:43.535298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.787 [2024-12-06 15:45:43.535312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.787 [2024-12-06 15:45:43.535319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.787 [2024-12-06 15:45:43.535326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.787 [2024-12-06 15:45:43.535340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.787 qpair failed and we were unable to recover it. 
00:28:37.787 [2024-12-06 15:45:43.545160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.787 [2024-12-06 15:45:43.545219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.787 [2024-12-06 15:45:43.545235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.787 [2024-12-06 15:45:43.545242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.787 [2024-12-06 15:45:43.545248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.787 [2024-12-06 15:45:43.545263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.787 qpair failed and we were unable to recover it. 
00:28:37.787 [2024-12-06 15:45:43.555191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.787 [2024-12-06 15:45:43.555239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.787 [2024-12-06 15:45:43.555253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.787 [2024-12-06 15:45:43.555260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.787 [2024-12-06 15:45:43.555267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.787 [2024-12-06 15:45:43.555282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.787 qpair failed and we were unable to recover it. 
00:28:37.787 [2024-12-06 15:45:43.565321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.787 [2024-12-06 15:45:43.565385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.787 [2024-12-06 15:45:43.565400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.787 [2024-12-06 15:45:43.565407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.787 [2024-12-06 15:45:43.565413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.787 [2024-12-06 15:45:43.565428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.787 qpair failed and we were unable to recover it. 
00:28:37.787 [2024-12-06 15:45:43.575298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.787 [2024-12-06 15:45:43.575353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.787 [2024-12-06 15:45:43.575370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.787 [2024-12-06 15:45:43.575378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.787 [2024-12-06 15:45:43.575385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.787 [2024-12-06 15:45:43.575400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.787 qpair failed and we were unable to recover it. 
00:28:37.787 [2024-12-06 15:45:43.585258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.787 [2024-12-06 15:45:43.585317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.787 [2024-12-06 15:45:43.585334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.787 [2024-12-06 15:45:43.585341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.787 [2024-12-06 15:45:43.585347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.787 [2024-12-06 15:45:43.585363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.787 qpair failed and we were unable to recover it. 
00:28:37.787 [2024-12-06 15:45:43.595365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.787 [2024-12-06 15:45:43.595436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.787 [2024-12-06 15:45:43.595450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.787 [2024-12-06 15:45:43.595457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.787 [2024-12-06 15:45:43.595463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.787 [2024-12-06 15:45:43.595479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.787 qpair failed and we were unable to recover it. 
00:28:37.787 [2024-12-06 15:45:43.605403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.787 [2024-12-06 15:45:43.605457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.787 [2024-12-06 15:45:43.605470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.787 [2024-12-06 15:45:43.605477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.787 [2024-12-06 15:45:43.605483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:37.787 [2024-12-06 15:45:43.605498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:37.787 qpair failed and we were unable to recover it. 
00:28:38.049 [2024-12-06 15:45:43.956376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.049 [2024-12-06 15:45:43.956432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.049 [2024-12-06 15:45:43.956446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.049 [2024-12-06 15:45:43.956454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.049 [2024-12-06 15:45:43.956460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.049 [2024-12-06 15:45:43.956475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.049 qpair failed and we were unable to recover it. 
00:28:38.049 [2024-12-06 15:45:43.966417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.049 [2024-12-06 15:45:43.966475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.049 [2024-12-06 15:45:43.966489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.049 [2024-12-06 15:45:43.966496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.049 [2024-12-06 15:45:43.966503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.049 [2024-12-06 15:45:43.966517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.049 qpair failed and we were unable to recover it. 
00:28:38.049 [2024-12-06 15:45:43.976423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.049 [2024-12-06 15:45:43.976487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.049 [2024-12-06 15:45:43.976501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.049 [2024-12-06 15:45:43.976508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.049 [2024-12-06 15:45:43.976515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.049 [2024-12-06 15:45:43.976530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.049 qpair failed and we were unable to recover it. 
00:28:38.049 [2024-12-06 15:45:43.986434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.049 [2024-12-06 15:45:43.986493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.049 [2024-12-06 15:45:43.986507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.049 [2024-12-06 15:45:43.986514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.049 [2024-12-06 15:45:43.986520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.049 [2024-12-06 15:45:43.986536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.049 qpair failed and we were unable to recover it. 
00:28:38.049 [2024-12-06 15:45:43.996469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.049 [2024-12-06 15:45:43.996525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.049 [2024-12-06 15:45:43.996539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.049 [2024-12-06 15:45:43.996547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.049 [2024-12-06 15:45:43.996553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.049 [2024-12-06 15:45:43.996568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.049 qpair failed and we were unable to recover it. 
00:28:38.049 [2024-12-06 15:45:44.006506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.049 [2024-12-06 15:45:44.006555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.049 [2024-12-06 15:45:44.006570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.049 [2024-12-06 15:45:44.006577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.049 [2024-12-06 15:45:44.006584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.049 [2024-12-06 15:45:44.006599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.049 qpair failed and we were unable to recover it. 
00:28:38.049 [2024-12-06 15:45:44.016534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.049 [2024-12-06 15:45:44.016594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.049 [2024-12-06 15:45:44.016608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.049 [2024-12-06 15:45:44.016618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.049 [2024-12-06 15:45:44.016624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.049 [2024-12-06 15:45:44.016639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.049 qpair failed and we were unable to recover it. 
00:28:38.049 [2024-12-06 15:45:44.026571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.049 [2024-12-06 15:45:44.026647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.049 [2024-12-06 15:45:44.026661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.049 [2024-12-06 15:45:44.026668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.049 [2024-12-06 15:45:44.026674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.049 [2024-12-06 15:45:44.026689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.049 qpair failed and we were unable to recover it. 
00:28:38.049 [2024-12-06 15:45:44.036603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.049 [2024-12-06 15:45:44.036659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.049 [2024-12-06 15:45:44.036672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.049 [2024-12-06 15:45:44.036679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.049 [2024-12-06 15:45:44.036686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.049 [2024-12-06 15:45:44.036701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.049 qpair failed and we were unable to recover it. 
00:28:38.308 [2024-12-06 15:45:44.046653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.308 [2024-12-06 15:45:44.046717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.308 [2024-12-06 15:45:44.046731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.308 [2024-12-06 15:45:44.046738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.308 [2024-12-06 15:45:44.046744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.308 [2024-12-06 15:45:44.046759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.308 qpair failed and we were unable to recover it. 
00:28:38.309 [2024-12-06 15:45:44.056759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.309 [2024-12-06 15:45:44.056839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.309 [2024-12-06 15:45:44.056854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.309 [2024-12-06 15:45:44.056861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.309 [2024-12-06 15:45:44.056867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.309 [2024-12-06 15:45:44.056887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.309 qpair failed and we were unable to recover it. 
00:28:38.309 [2024-12-06 15:45:44.066646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.309 [2024-12-06 15:45:44.066706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.309 [2024-12-06 15:45:44.066721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.309 [2024-12-06 15:45:44.066728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.309 [2024-12-06 15:45:44.066735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.309 [2024-12-06 15:45:44.066752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.309 qpair failed and we were unable to recover it. 
00:28:38.309 [2024-12-06 15:45:44.076710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.309 [2024-12-06 15:45:44.076761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.309 [2024-12-06 15:45:44.076775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.309 [2024-12-06 15:45:44.076782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.309 [2024-12-06 15:45:44.076789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.309 [2024-12-06 15:45:44.076805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.309 qpair failed and we were unable to recover it. 
00:28:38.309 [2024-12-06 15:45:44.086764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.309 [2024-12-06 15:45:44.086817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.309 [2024-12-06 15:45:44.086830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.309 [2024-12-06 15:45:44.086837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.309 [2024-12-06 15:45:44.086844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.309 [2024-12-06 15:45:44.086860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.309 qpair failed and we were unable to recover it. 
00:28:38.309 [2024-12-06 15:45:44.096799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.309 [2024-12-06 15:45:44.096857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.309 [2024-12-06 15:45:44.096870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.309 [2024-12-06 15:45:44.096878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.309 [2024-12-06 15:45:44.096885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.309 [2024-12-06 15:45:44.096900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.309 qpair failed and we were unable to recover it. 
00:28:38.309 [2024-12-06 15:45:44.106805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.309 [2024-12-06 15:45:44.106882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.309 [2024-12-06 15:45:44.106896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.309 [2024-12-06 15:45:44.106903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.309 [2024-12-06 15:45:44.106909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.309 [2024-12-06 15:45:44.106924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.309 qpair failed and we were unable to recover it. 
00:28:38.309 [2024-12-06 15:45:44.116813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.309 [2024-12-06 15:45:44.116916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.309 [2024-12-06 15:45:44.116931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.309 [2024-12-06 15:45:44.116938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.309 [2024-12-06 15:45:44.116946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.309 [2024-12-06 15:45:44.116961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.309 qpair failed and we were unable to recover it. 
00:28:38.309 [2024-12-06 15:45:44.126857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.309 [2024-12-06 15:45:44.126913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.309 [2024-12-06 15:45:44.126926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.309 [2024-12-06 15:45:44.126933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.309 [2024-12-06 15:45:44.126940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.309 [2024-12-06 15:45:44.126955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.309 qpair failed and we were unable to recover it. 
00:28:38.309 [2024-12-06 15:45:44.136930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.309 [2024-12-06 15:45:44.136989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.309 [2024-12-06 15:45:44.137001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.309 [2024-12-06 15:45:44.137008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.309 [2024-12-06 15:45:44.137015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.309 [2024-12-06 15:45:44.137030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.309 qpair failed and we were unable to recover it. 
00:28:38.309 [2024-12-06 15:45:44.146936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.309 [2024-12-06 15:45:44.146993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.309 [2024-12-06 15:45:44.147010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.309 [2024-12-06 15:45:44.147017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.309 [2024-12-06 15:45:44.147023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.309 [2024-12-06 15:45:44.147037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.309 qpair failed and we were unable to recover it. 
00:28:38.309 [2024-12-06 15:45:44.156938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.309 [2024-12-06 15:45:44.156992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.309 [2024-12-06 15:45:44.157007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.309 [2024-12-06 15:45:44.157014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.309 [2024-12-06 15:45:44.157021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.309 [2024-12-06 15:45:44.157036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.309 qpair failed and we were unable to recover it. 
00:28:38.309 [2024-12-06 15:45:44.166968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.309 [2024-12-06 15:45:44.167025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.309 [2024-12-06 15:45:44.167040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.309 [2024-12-06 15:45:44.167047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.309 [2024-12-06 15:45:44.167054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.309 [2024-12-06 15:45:44.167069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.309 qpair failed and we were unable to recover it. 
00:28:38.309 [2024-12-06 15:45:44.177000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.309 [2024-12-06 15:45:44.177056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.309 [2024-12-06 15:45:44.177069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.309 [2024-12-06 15:45:44.177076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.309 [2024-12-06 15:45:44.177082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.310 [2024-12-06 15:45:44.177098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.310 qpair failed and we were unable to recover it. 
00:28:38.310 [2024-12-06 15:45:44.187074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.310 [2024-12-06 15:45:44.187128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.310 [2024-12-06 15:45:44.187142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.310 [2024-12-06 15:45:44.187149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.310 [2024-12-06 15:45:44.187156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.310 [2024-12-06 15:45:44.187174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.310 qpair failed and we were unable to recover it. 
00:28:38.310 [2024-12-06 15:45:44.197098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.310 [2024-12-06 15:45:44.197162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.310 [2024-12-06 15:45:44.197176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.310 [2024-12-06 15:45:44.197183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.310 [2024-12-06 15:45:44.197190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.310 [2024-12-06 15:45:44.197205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.310 qpair failed and we were unable to recover it. 
00:28:38.310 [2024-12-06 15:45:44.207006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.310 [2024-12-06 15:45:44.207067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.310 [2024-12-06 15:45:44.207080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.310 [2024-12-06 15:45:44.207088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.310 [2024-12-06 15:45:44.207095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.310 [2024-12-06 15:45:44.207110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.310 qpair failed and we were unable to recover it. 
00:28:38.310 [2024-12-06 15:45:44.217166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.310 [2024-12-06 15:45:44.217224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.310 [2024-12-06 15:45:44.217238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.310 [2024-12-06 15:45:44.217245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.310 [2024-12-06 15:45:44.217252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.310 [2024-12-06 15:45:44.217266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.310 qpair failed and we were unable to recover it. 
00:28:38.310 [2024-12-06 15:45:44.227185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.310 [2024-12-06 15:45:44.227296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.310 [2024-12-06 15:45:44.227310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.310 [2024-12-06 15:45:44.227317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.310 [2024-12-06 15:45:44.227324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.310 [2024-12-06 15:45:44.227339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.310 qpair failed and we were unable to recover it. 
00:28:38.310 [2024-12-06 15:45:44.237210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.310 [2024-12-06 15:45:44.237263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.310 [2024-12-06 15:45:44.237277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.310 [2024-12-06 15:45:44.237284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.310 [2024-12-06 15:45:44.237290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.310 [2024-12-06 15:45:44.237305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.310 qpair failed and we were unable to recover it. 
00:28:38.310 [2024-12-06 15:45:44.247273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.310 [2024-12-06 15:45:44.247361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.310 [2024-12-06 15:45:44.247379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.310 [2024-12-06 15:45:44.247386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.310 [2024-12-06 15:45:44.247392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.310 [2024-12-06 15:45:44.247407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.310 qpair failed and we were unable to recover it. 
00:28:38.310 [2024-12-06 15:45:44.257228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.310 [2024-12-06 15:45:44.257284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.310 [2024-12-06 15:45:44.257298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.310 [2024-12-06 15:45:44.257305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.310 [2024-12-06 15:45:44.257312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.310 [2024-12-06 15:45:44.257327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.310 qpair failed and we were unable to recover it. 
00:28:38.310 [2024-12-06 15:45:44.267255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.310 [2024-12-06 15:45:44.267310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.310 [2024-12-06 15:45:44.267324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.310 [2024-12-06 15:45:44.267331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.310 [2024-12-06 15:45:44.267338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.310 [2024-12-06 15:45:44.267353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.310 qpair failed and we were unable to recover it. 
00:28:38.310 [2024-12-06 15:45:44.277308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.310 [2024-12-06 15:45:44.277362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.310 [2024-12-06 15:45:44.277383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.310 [2024-12-06 15:45:44.277390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.310 [2024-12-06 15:45:44.277397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.310 [2024-12-06 15:45:44.277412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.310 qpair failed and we were unable to recover it. 
00:28:38.310 [2024-12-06 15:45:44.287227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.310 [2024-12-06 15:45:44.287280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.310 [2024-12-06 15:45:44.287294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.310 [2024-12-06 15:45:44.287301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.310 [2024-12-06 15:45:44.287309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.310 [2024-12-06 15:45:44.287323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.310 qpair failed and we were unable to recover it. 
00:28:38.310 [2024-12-06 15:45:44.297356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.310 [2024-12-06 15:45:44.297433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.310 [2024-12-06 15:45:44.297447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.310 [2024-12-06 15:45:44.297454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.310 [2024-12-06 15:45:44.297461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.310 [2024-12-06 15:45:44.297476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.310 qpair failed and we were unable to recover it. 
00:28:38.569 [2024-12-06 15:45:44.307400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.570 [2024-12-06 15:45:44.307472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.570 [2024-12-06 15:45:44.307486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.570 [2024-12-06 15:45:44.307493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.570 [2024-12-06 15:45:44.307500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.570 [2024-12-06 15:45:44.307515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.570 qpair failed and we were unable to recover it. 
00:28:38.570 [2024-12-06 15:45:44.317418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.570 [2024-12-06 15:45:44.317478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.570 [2024-12-06 15:45:44.317494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.570 [2024-12-06 15:45:44.317503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.570 [2024-12-06 15:45:44.317513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.570 [2024-12-06 15:45:44.317528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.570 qpair failed and we were unable to recover it. 
00:28:38.570 [2024-12-06 15:45:44.327432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.570 [2024-12-06 15:45:44.327485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.570 [2024-12-06 15:45:44.327498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.570 [2024-12-06 15:45:44.327505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.570 [2024-12-06 15:45:44.327513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.570 [2024-12-06 15:45:44.327527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.570 qpair failed and we were unable to recover it. 
00:28:38.570 [2024-12-06 15:45:44.337463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.570 [2024-12-06 15:45:44.337533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.570 [2024-12-06 15:45:44.337548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.570 [2024-12-06 15:45:44.337555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.570 [2024-12-06 15:45:44.337561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.570 [2024-12-06 15:45:44.337577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.570 qpair failed and we were unable to recover it. 
00:28:38.570 [2024-12-06 15:45:44.347490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.570 [2024-12-06 15:45:44.347544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.570 [2024-12-06 15:45:44.347558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.570 [2024-12-06 15:45:44.347565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.570 [2024-12-06 15:45:44.347571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.570 [2024-12-06 15:45:44.347586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.570 qpair failed and we were unable to recover it. 
00:28:38.570 [2024-12-06 15:45:44.357499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.570 [2024-12-06 15:45:44.357556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.570 [2024-12-06 15:45:44.357568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.570 [2024-12-06 15:45:44.357575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.570 [2024-12-06 15:45:44.357582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.570 [2024-12-06 15:45:44.357596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.570 qpair failed and we were unable to recover it. 
00:28:38.570 [2024-12-06 15:45:44.367545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.570 [2024-12-06 15:45:44.367600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.570 [2024-12-06 15:45:44.367614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.570 [2024-12-06 15:45:44.367622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.570 [2024-12-06 15:45:44.367629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.570 [2024-12-06 15:45:44.367644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.570 qpair failed and we were unable to recover it. 
00:28:38.570 [2024-12-06 15:45:44.377578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.570 [2024-12-06 15:45:44.377650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.570 [2024-12-06 15:45:44.377666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.570 [2024-12-06 15:45:44.377674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.570 [2024-12-06 15:45:44.377680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.570 [2024-12-06 15:45:44.377696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.570 qpair failed and we were unable to recover it. 
00:28:38.570 [2024-12-06 15:45:44.387590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.570 [2024-12-06 15:45:44.387647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.570 [2024-12-06 15:45:44.387661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.570 [2024-12-06 15:45:44.387668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.570 [2024-12-06 15:45:44.387675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.570 [2024-12-06 15:45:44.387690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.570 qpair failed and we were unable to recover it. 
00:28:38.570 [2024-12-06 15:45:44.397615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.570 [2024-12-06 15:45:44.397670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.570 [2024-12-06 15:45:44.397684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.570 [2024-12-06 15:45:44.397691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.570 [2024-12-06 15:45:44.397698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.570 [2024-12-06 15:45:44.397713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.570 qpair failed and we were unable to recover it. 
00:28:38.570 [2024-12-06 15:45:44.407642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.570 [2024-12-06 15:45:44.407692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.570 [2024-12-06 15:45:44.407709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.570 [2024-12-06 15:45:44.407716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.570 [2024-12-06 15:45:44.407723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.570 [2024-12-06 15:45:44.407738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.570 qpair failed and we were unable to recover it. 
00:28:38.570 [2024-12-06 15:45:44.417603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.570 [2024-12-06 15:45:44.417663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.570 [2024-12-06 15:45:44.417676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.570 [2024-12-06 15:45:44.417683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.570 [2024-12-06 15:45:44.417690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.570 [2024-12-06 15:45:44.417705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.570 qpair failed and we were unable to recover it. 
00:28:38.570 [2024-12-06 15:45:44.427686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.570 [2024-12-06 15:45:44.427738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.570 [2024-12-06 15:45:44.427752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.570 [2024-12-06 15:45:44.427759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.570 [2024-12-06 15:45:44.427765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.570 [2024-12-06 15:45:44.427780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.570 qpair failed and we were unable to recover it. 
00:28:38.570 [2024-12-06 15:45:44.437714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.571 [2024-12-06 15:45:44.437770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.571 [2024-12-06 15:45:44.437785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.571 [2024-12-06 15:45:44.437792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.571 [2024-12-06 15:45:44.437798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.571 [2024-12-06 15:45:44.437813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-06 15:45:44.447742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.571 [2024-12-06 15:45:44.447797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.571 [2024-12-06 15:45:44.447810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.571 [2024-12-06 15:45:44.447820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.571 [2024-12-06 15:45:44.447827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.571 [2024-12-06 15:45:44.447841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-06 15:45:44.457774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.571 [2024-12-06 15:45:44.457830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.571 [2024-12-06 15:45:44.457843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.571 [2024-12-06 15:45:44.457850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.571 [2024-12-06 15:45:44.457857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.571 [2024-12-06 15:45:44.457871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-06 15:45:44.467806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.571 [2024-12-06 15:45:44.467864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.571 [2024-12-06 15:45:44.467877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.571 [2024-12-06 15:45:44.467885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.571 [2024-12-06 15:45:44.467892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.571 [2024-12-06 15:45:44.467906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-06 15:45:44.477825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.571 [2024-12-06 15:45:44.477878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.571 [2024-12-06 15:45:44.477892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.571 [2024-12-06 15:45:44.477899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.571 [2024-12-06 15:45:44.477906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.571 [2024-12-06 15:45:44.477921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-06 15:45:44.487844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.571 [2024-12-06 15:45:44.487916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.571 [2024-12-06 15:45:44.487930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.571 [2024-12-06 15:45:44.487937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.571 [2024-12-06 15:45:44.487943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.571 [2024-12-06 15:45:44.487958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-06 15:45:44.497917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.571 [2024-12-06 15:45:44.497976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.571 [2024-12-06 15:45:44.497990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.571 [2024-12-06 15:45:44.497998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.571 [2024-12-06 15:45:44.498004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.571 [2024-12-06 15:45:44.498020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-06 15:45:44.507917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.571 [2024-12-06 15:45:44.507977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.571 [2024-12-06 15:45:44.507991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.571 [2024-12-06 15:45:44.507998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.571 [2024-12-06 15:45:44.508004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.571 [2024-12-06 15:45:44.508020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-06 15:45:44.517945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.571 [2024-12-06 15:45:44.518001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.571 [2024-12-06 15:45:44.518014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.571 [2024-12-06 15:45:44.518022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.571 [2024-12-06 15:45:44.518028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.571 [2024-12-06 15:45:44.518043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-06 15:45:44.527979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.571 [2024-12-06 15:45:44.528037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.571 [2024-12-06 15:45:44.528051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.571 [2024-12-06 15:45:44.528058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.571 [2024-12-06 15:45:44.528064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:38.571 [2024-12-06 15:45:44.528079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.571 qpair failed and we were unable to recover it. 
00:28:38.571 [2024-12-06 15:45:44.538009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.571 [2024-12-06 15:45:44.538067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.571 [2024-12-06 15:45:44.538080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.571 [2024-12-06 15:45:44.538088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.571 [2024-12-06 15:45:44.538094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.571 [2024-12-06 15:45:44.538109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.571 qpair failed and we were unable to recover it.
00:28:38.571 [2024-12-06 15:45:44.548047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.571 [2024-12-06 15:45:44.548104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.571 [2024-12-06 15:45:44.548118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.571 [2024-12-06 15:45:44.548124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.571 [2024-12-06 15:45:44.548131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.571 [2024-12-06 15:45:44.548145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.571 qpair failed and we were unable to recover it.
00:28:38.571 [2024-12-06 15:45:44.558116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.571 [2024-12-06 15:45:44.558168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.571 [2024-12-06 15:45:44.558181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.571 [2024-12-06 15:45:44.558188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.571 [2024-12-06 15:45:44.558195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.571 [2024-12-06 15:45:44.558210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.571 qpair failed and we were unable to recover it.
00:28:38.830 [2024-12-06 15:45:44.568126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.830 [2024-12-06 15:45:44.568200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.830 [2024-12-06 15:45:44.568214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.830 [2024-12-06 15:45:44.568221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.830 [2024-12-06 15:45:44.568227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.830 [2024-12-06 15:45:44.568242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.830 qpair failed and we were unable to recover it.
00:28:38.830 [2024-12-06 15:45:44.578153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.830 [2024-12-06 15:45:44.578221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.830 [2024-12-06 15:45:44.578237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.830 [2024-12-06 15:45:44.578249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.830 [2024-12-06 15:45:44.578256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.830 [2024-12-06 15:45:44.578272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.830 qpair failed and we were unable to recover it.
00:28:38.830 [2024-12-06 15:45:44.588191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.830 [2024-12-06 15:45:44.588252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.830 [2024-12-06 15:45:44.588265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.830 [2024-12-06 15:45:44.588273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.830 [2024-12-06 15:45:44.588280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.830 [2024-12-06 15:45:44.588295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.830 qpair failed and we were unable to recover it.
00:28:38.830 [2024-12-06 15:45:44.598098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.830 [2024-12-06 15:45:44.598167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.830 [2024-12-06 15:45:44.598181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.830 [2024-12-06 15:45:44.598189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.830 [2024-12-06 15:45:44.598196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.830 [2024-12-06 15:45:44.598210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.830 qpair failed and we were unable to recover it.
00:28:38.830 [2024-12-06 15:45:44.608189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.830 [2024-12-06 15:45:44.608276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.830 [2024-12-06 15:45:44.608290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.830 [2024-12-06 15:45:44.608298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.830 [2024-12-06 15:45:44.608305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.830 [2024-12-06 15:45:44.608321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.830 qpair failed and we were unable to recover it.
00:28:38.830 [2024-12-06 15:45:44.618250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.830 [2024-12-06 15:45:44.618352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.830 [2024-12-06 15:45:44.618365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.830 [2024-12-06 15:45:44.618376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.830 [2024-12-06 15:45:44.618382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.830 [2024-12-06 15:45:44.618400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.830 qpair failed and we were unable to recover it.
00:28:38.830 [2024-12-06 15:45:44.628274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.830 [2024-12-06 15:45:44.628328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.830 [2024-12-06 15:45:44.628341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.830 [2024-12-06 15:45:44.628348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.830 [2024-12-06 15:45:44.628355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.830 [2024-12-06 15:45:44.628374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.830 qpair failed and we were unable to recover it.
00:28:38.830 [2024-12-06 15:45:44.638345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.830 [2024-12-06 15:45:44.638406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.830 [2024-12-06 15:45:44.638420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.830 [2024-12-06 15:45:44.638427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.830 [2024-12-06 15:45:44.638434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.830 [2024-12-06 15:45:44.638449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.830 qpair failed and we were unable to recover it.
00:28:38.830 [2024-12-06 15:45:44.648322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.830 [2024-12-06 15:45:44.648376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.830 [2024-12-06 15:45:44.648390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.830 [2024-12-06 15:45:44.648397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.830 [2024-12-06 15:45:44.648404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.830 [2024-12-06 15:45:44.648419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.830 qpair failed and we were unable to recover it.
00:28:38.830 [2024-12-06 15:45:44.658382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.830 [2024-12-06 15:45:44.658452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.830 [2024-12-06 15:45:44.658465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.830 [2024-12-06 15:45:44.658472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.830 [2024-12-06 15:45:44.658479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.830 [2024-12-06 15:45:44.658494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.830 qpair failed and we were unable to recover it.
00:28:38.830 [2024-12-06 15:45:44.668374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.830 [2024-12-06 15:45:44.668432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.830 [2024-12-06 15:45:44.668446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.830 [2024-12-06 15:45:44.668453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.830 [2024-12-06 15:45:44.668460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.831 [2024-12-06 15:45:44.668474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.831 qpair failed and we were unable to recover it.
00:28:38.831 [2024-12-06 15:45:44.678432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.831 [2024-12-06 15:45:44.678486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.831 [2024-12-06 15:45:44.678500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.831 [2024-12-06 15:45:44.678506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.831 [2024-12-06 15:45:44.678513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.831 [2024-12-06 15:45:44.678529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.831 qpair failed and we were unable to recover it.
00:28:38.831 [2024-12-06 15:45:44.688442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.831 [2024-12-06 15:45:44.688507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.831 [2024-12-06 15:45:44.688520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.831 [2024-12-06 15:45:44.688528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.831 [2024-12-06 15:45:44.688534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.831 [2024-12-06 15:45:44.688549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.831 qpair failed and we were unable to recover it.
00:28:38.831 [2024-12-06 15:45:44.698471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.831 [2024-12-06 15:45:44.698528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.831 [2024-12-06 15:45:44.698541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.831 [2024-12-06 15:45:44.698548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.831 [2024-12-06 15:45:44.698554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.831 [2024-12-06 15:45:44.698569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.831 qpair failed and we were unable to recover it.
00:28:38.831 [2024-12-06 15:45:44.708563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.831 [2024-12-06 15:45:44.708667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.831 [2024-12-06 15:45:44.708685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.831 [2024-12-06 15:45:44.708692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.831 [2024-12-06 15:45:44.708699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.831 [2024-12-06 15:45:44.708714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.831 qpair failed and we were unable to recover it.
00:28:38.831 [2024-12-06 15:45:44.718514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.831 [2024-12-06 15:45:44.718589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.831 [2024-12-06 15:45:44.718603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.831 [2024-12-06 15:45:44.718610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.831 [2024-12-06 15:45:44.718616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.831 [2024-12-06 15:45:44.718631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.831 qpair failed and we were unable to recover it.
00:28:38.831 [2024-12-06 15:45:44.728559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.831 [2024-12-06 15:45:44.728632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.831 [2024-12-06 15:45:44.728646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.831 [2024-12-06 15:45:44.728653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.831 [2024-12-06 15:45:44.728659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.831 [2024-12-06 15:45:44.728673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.831 qpair failed and we were unable to recover it.
00:28:38.831 [2024-12-06 15:45:44.738559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.831 [2024-12-06 15:45:44.738625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.831 [2024-12-06 15:45:44.738638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.831 [2024-12-06 15:45:44.738646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.831 [2024-12-06 15:45:44.738652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.831 [2024-12-06 15:45:44.738668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.831 qpair failed and we were unable to recover it.
00:28:38.831 [2024-12-06 15:45:44.748609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.831 [2024-12-06 15:45:44.748660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.831 [2024-12-06 15:45:44.748673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.831 [2024-12-06 15:45:44.748681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.831 [2024-12-06 15:45:44.748690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.831 [2024-12-06 15:45:44.748705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.831 qpair failed and we were unable to recover it.
00:28:38.831 [2024-12-06 15:45:44.758633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.831 [2024-12-06 15:45:44.758687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.831 [2024-12-06 15:45:44.758700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.831 [2024-12-06 15:45:44.758707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.831 [2024-12-06 15:45:44.758714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.831 [2024-12-06 15:45:44.758728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.831 qpair failed and we were unable to recover it.
00:28:38.831 [2024-12-06 15:45:44.768662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.831 [2024-12-06 15:45:44.768712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.831 [2024-12-06 15:45:44.768726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.831 [2024-12-06 15:45:44.768733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.831 [2024-12-06 15:45:44.768739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.831 [2024-12-06 15:45:44.768754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.831 qpair failed and we were unable to recover it.
00:28:38.831 [2024-12-06 15:45:44.778696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.831 [2024-12-06 15:45:44.778749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.831 [2024-12-06 15:45:44.778763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.831 [2024-12-06 15:45:44.778770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.831 [2024-12-06 15:45:44.778776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.831 [2024-12-06 15:45:44.778792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.831 qpair failed and we were unable to recover it.
00:28:38.831 [2024-12-06 15:45:44.788717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.831 [2024-12-06 15:45:44.788772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.831 [2024-12-06 15:45:44.788787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.831 [2024-12-06 15:45:44.788794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.831 [2024-12-06 15:45:44.788801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.831 [2024-12-06 15:45:44.788816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.831 qpair failed and we were unable to recover it.
00:28:38.831 [2024-12-06 15:45:44.798744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.831 [2024-12-06 15:45:44.798792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.831 [2024-12-06 15:45:44.798806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.831 [2024-12-06 15:45:44.798813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.832 [2024-12-06 15:45:44.798819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.832 [2024-12-06 15:45:44.798834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.832 qpair failed and we were unable to recover it.
00:28:38.832 [2024-12-06 15:45:44.808765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.832 [2024-12-06 15:45:44.808813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.832 [2024-12-06 15:45:44.808826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.832 [2024-12-06 15:45:44.808833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.832 [2024-12-06 15:45:44.808839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.832 [2024-12-06 15:45:44.808854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.832 qpair failed and we were unable to recover it.
00:28:38.832 [2024-12-06 15:45:44.818742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.832 [2024-12-06 15:45:44.818810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.832 [2024-12-06 15:45:44.818824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.832 [2024-12-06 15:45:44.818831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.832 [2024-12-06 15:45:44.818837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:38.832 [2024-12-06 15:45:44.818852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:38.832 qpair failed and we were unable to recover it.
00:28:39.090 [2024-12-06 15:45:44.828848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.090 [2024-12-06 15:45:44.828902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.090 [2024-12-06 15:45:44.828917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.090 [2024-12-06 15:45:44.828924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.090 [2024-12-06 15:45:44.828931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:39.090 [2024-12-06 15:45:44.828946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:39.090 qpair failed and we were unable to recover it.
00:28:39.090 [2024-12-06 15:45:44.838954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.090 [2024-12-06 15:45:44.839015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.090 [2024-12-06 15:45:44.839031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.090 [2024-12-06 15:45:44.839039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.090 [2024-12-06 15:45:44.839045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:39.090 [2024-12-06 15:45:44.839060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:39.090 qpair failed and we were unable to recover it.
00:28:39.090 [2024-12-06 15:45:44.848857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.090 [2024-12-06 15:45:44.848924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.090 [2024-12-06 15:45:44.848938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.090 [2024-12-06 15:45:44.848945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.090 [2024-12-06 15:45:44.848951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:39.090 [2024-12-06 15:45:44.848966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:39.090 qpair failed and we were unable to recover it.
00:28:39.090 [2024-12-06 15:45:44.858986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.090 [2024-12-06 15:45:44.859042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.090 [2024-12-06 15:45:44.859056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.090 [2024-12-06 15:45:44.859063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.090 [2024-12-06 15:45:44.859070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:39.090 [2024-12-06 15:45:44.859084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:39.090 qpair failed and we were unable to recover it.
00:28:39.090 [2024-12-06 15:45:44.868974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.090 [2024-12-06 15:45:44.869048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.090 [2024-12-06 15:45:44.869061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.090 [2024-12-06 15:45:44.869069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.090 [2024-12-06 15:45:44.869075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:39.090 [2024-12-06 15:45:44.869090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:39.090 qpair failed and we were unable to recover it.
00:28:39.090 [2024-12-06 15:45:44.878960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.090 [2024-12-06 15:45:44.879012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.090 [2024-12-06 15:45:44.879026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.090 [2024-12-06 15:45:44.879033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.090 [2024-12-06 15:45:44.879043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90
00:28:39.090 [2024-12-06 15:45:44.879058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:39.090 qpair failed and we were unable to recover it.
00:28:39.090 [2024-12-06 15:45:44.889013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.090 [2024-12-06 15:45:44.889087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.090 [2024-12-06 15:45:44.889100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.090 [2024-12-06 15:45:44.889107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.090 [2024-12-06 15:45:44.889114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.090 [2024-12-06 15:45:44.889129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.090 qpair failed and we were unable to recover it. 
00:28:39.090 [2024-12-06 15:45:44.899018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.090 [2024-12-06 15:45:44.899092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.090 [2024-12-06 15:45:44.899106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.090 [2024-12-06 15:45:44.899114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.090 [2024-12-06 15:45:44.899120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.090 [2024-12-06 15:45:44.899135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.090 qpair failed and we were unable to recover it. 
00:28:39.090 [2024-12-06 15:45:44.909056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.090 [2024-12-06 15:45:44.909116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.090 [2024-12-06 15:45:44.909130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.090 [2024-12-06 15:45:44.909137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.090 [2024-12-06 15:45:44.909144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.090 [2024-12-06 15:45:44.909159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.090 qpair failed and we were unable to recover it. 
00:28:39.090 [2024-12-06 15:45:44.919104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.091 [2024-12-06 15:45:44.919161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.091 [2024-12-06 15:45:44.919176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.091 [2024-12-06 15:45:44.919184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.091 [2024-12-06 15:45:44.919190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.091 [2024-12-06 15:45:44.919205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.091 qpair failed and we were unable to recover it. 
00:28:39.091 [2024-12-06 15:45:44.929096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.091 [2024-12-06 15:45:44.929148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.091 [2024-12-06 15:45:44.929162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.091 [2024-12-06 15:45:44.929170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.091 [2024-12-06 15:45:44.929177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.091 [2024-12-06 15:45:44.929191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.091 qpair failed and we were unable to recover it. 
00:28:39.091 [2024-12-06 15:45:44.939144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.091 [2024-12-06 15:45:44.939203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.091 [2024-12-06 15:45:44.939216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.091 [2024-12-06 15:45:44.939223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.091 [2024-12-06 15:45:44.939231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.091 [2024-12-06 15:45:44.939246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.091 qpair failed and we were unable to recover it. 
00:28:39.091 [2024-12-06 15:45:44.949162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.091 [2024-12-06 15:45:44.949224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.091 [2024-12-06 15:45:44.949238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.091 [2024-12-06 15:45:44.949245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.091 [2024-12-06 15:45:44.949252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.091 [2024-12-06 15:45:44.949267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.091 qpair failed and we were unable to recover it. 
00:28:39.091 [2024-12-06 15:45:44.959189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.091 [2024-12-06 15:45:44.959244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.091 [2024-12-06 15:45:44.959258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.091 [2024-12-06 15:45:44.959266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.091 [2024-12-06 15:45:44.959273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.091 [2024-12-06 15:45:44.959287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.091 qpair failed and we were unable to recover it. 
00:28:39.091 [2024-12-06 15:45:44.969220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.091 [2024-12-06 15:45:44.969273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.091 [2024-12-06 15:45:44.969290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.091 [2024-12-06 15:45:44.969297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.091 [2024-12-06 15:45:44.969304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.091 [2024-12-06 15:45:44.969319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.091 qpair failed and we were unable to recover it. 
00:28:39.091 [2024-12-06 15:45:44.979264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.091 [2024-12-06 15:45:44.979327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.091 [2024-12-06 15:45:44.979342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.091 [2024-12-06 15:45:44.979350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.091 [2024-12-06 15:45:44.979356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.091 [2024-12-06 15:45:44.979377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.091 qpair failed and we were unable to recover it. 
00:28:39.091 [2024-12-06 15:45:44.989232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.091 [2024-12-06 15:45:44.989291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.091 [2024-12-06 15:45:44.989304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.091 [2024-12-06 15:45:44.989312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.091 [2024-12-06 15:45:44.989318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.091 [2024-12-06 15:45:44.989333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.091 qpair failed and we were unable to recover it. 
00:28:39.091 [2024-12-06 15:45:44.999319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.091 [2024-12-06 15:45:44.999378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.091 [2024-12-06 15:45:44.999393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.091 [2024-12-06 15:45:44.999401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.091 [2024-12-06 15:45:44.999408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.091 [2024-12-06 15:45:44.999423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.091 qpair failed and we were unable to recover it. 
00:28:39.091 [2024-12-06 15:45:45.009331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.091 [2024-12-06 15:45:45.009389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.091 [2024-12-06 15:45:45.009404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.091 [2024-12-06 15:45:45.009414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.091 [2024-12-06 15:45:45.009420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.091 [2024-12-06 15:45:45.009435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.091 qpair failed and we were unable to recover it. 
00:28:39.091 [2024-12-06 15:45:45.019388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.091 [2024-12-06 15:45:45.019443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.091 [2024-12-06 15:45:45.019457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.091 [2024-12-06 15:45:45.019464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.091 [2024-12-06 15:45:45.019470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.091 [2024-12-06 15:45:45.019485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.091 qpair failed and we were unable to recover it. 
00:28:39.091 [2024-12-06 15:45:45.029443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.091 [2024-12-06 15:45:45.029545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.091 [2024-12-06 15:45:45.029559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.091 [2024-12-06 15:45:45.029566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.091 [2024-12-06 15:45:45.029572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.091 [2024-12-06 15:45:45.029588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.091 qpair failed and we were unable to recover it. 
00:28:39.091 [2024-12-06 15:45:45.039435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.091 [2024-12-06 15:45:45.039489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.091 [2024-12-06 15:45:45.039503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.091 [2024-12-06 15:45:45.039511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.091 [2024-12-06 15:45:45.039518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.091 [2024-12-06 15:45:45.039533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.091 qpair failed and we were unable to recover it. 
00:28:39.091 [2024-12-06 15:45:45.049424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.091 [2024-12-06 15:45:45.049497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.092 [2024-12-06 15:45:45.049511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.092 [2024-12-06 15:45:45.049519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.092 [2024-12-06 15:45:45.049525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.092 [2024-12-06 15:45:45.049540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.092 qpair failed and we were unable to recover it. 
00:28:39.092 [2024-12-06 15:45:45.059536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.092 [2024-12-06 15:45:45.059617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.092 [2024-12-06 15:45:45.059632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.092 [2024-12-06 15:45:45.059639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.092 [2024-12-06 15:45:45.059645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.092 [2024-12-06 15:45:45.059661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.092 qpair failed and we were unable to recover it. 
00:28:39.092 [2024-12-06 15:45:45.069509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.092 [2024-12-06 15:45:45.069588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.092 [2024-12-06 15:45:45.069602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.092 [2024-12-06 15:45:45.069610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.092 [2024-12-06 15:45:45.069616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.092 [2024-12-06 15:45:45.069632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.092 qpair failed and we were unable to recover it. 
00:28:39.092 [2024-12-06 15:45:45.079474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.092 [2024-12-06 15:45:45.079529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.092 [2024-12-06 15:45:45.079544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.092 [2024-12-06 15:45:45.079551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.092 [2024-12-06 15:45:45.079558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.092 [2024-12-06 15:45:45.079573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.092 qpair failed and we were unable to recover it. 
00:28:39.348 [2024-12-06 15:45:45.089625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.348 [2024-12-06 15:45:45.089729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.348 [2024-12-06 15:45:45.089743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.348 [2024-12-06 15:45:45.089751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.348 [2024-12-06 15:45:45.089758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.348 [2024-12-06 15:45:45.089773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.348 qpair failed and we were unable to recover it. 
00:28:39.348 [2024-12-06 15:45:45.099708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.348 [2024-12-06 15:45:45.099786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.348 [2024-12-06 15:45:45.099800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.348 [2024-12-06 15:45:45.099808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.348 [2024-12-06 15:45:45.099815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.348 [2024-12-06 15:45:45.099831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.348 qpair failed and we were unable to recover it. 
00:28:39.348 [2024-12-06 15:45:45.109644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.348 [2024-12-06 15:45:45.109702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.348 [2024-12-06 15:45:45.109716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.348 [2024-12-06 15:45:45.109723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.348 [2024-12-06 15:45:45.109730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.348 [2024-12-06 15:45:45.109746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.348 qpair failed and we were unable to recover it. 
00:28:39.348 [2024-12-06 15:45:45.119705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.348 [2024-12-06 15:45:45.119779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.348 [2024-12-06 15:45:45.119793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.348 [2024-12-06 15:45:45.119801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.348 [2024-12-06 15:45:45.119808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.348 [2024-12-06 15:45:45.119824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.348 qpair failed and we were unable to recover it. 
00:28:39.348 [2024-12-06 15:45:45.129672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.348 [2024-12-06 15:45:45.129727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.348 [2024-12-06 15:45:45.129741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.348 [2024-12-06 15:45:45.129748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.348 [2024-12-06 15:45:45.129754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.348 [2024-12-06 15:45:45.129769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.348 qpair failed and we were unable to recover it. 
00:28:39.348 [2024-12-06 15:45:45.139740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.348 [2024-12-06 15:45:45.139797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.348 [2024-12-06 15:45:45.139810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.348 [2024-12-06 15:45:45.139821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.348 [2024-12-06 15:45:45.139827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.348 [2024-12-06 15:45:45.139842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.348 qpair failed and we were unable to recover it. 
00:28:39.348 [2024-12-06 15:45:45.149769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.348 [2024-12-06 15:45:45.149847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.348 [2024-12-06 15:45:45.149861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.348 [2024-12-06 15:45:45.149869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.348 [2024-12-06 15:45:45.149875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.348 [2024-12-06 15:45:45.149889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.348 qpair failed and we were unable to recover it. 
00:28:39.348 [2024-12-06 15:45:45.159700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.348 [2024-12-06 15:45:45.159748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.348 [2024-12-06 15:45:45.159762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.348 [2024-12-06 15:45:45.159769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.348 [2024-12-06 15:45:45.159775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.348 [2024-12-06 15:45:45.159791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.348 qpair failed and we were unable to recover it. 
00:28:39.348 [2024-12-06 15:45:45.169731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.348 [2024-12-06 15:45:45.169781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.348 [2024-12-06 15:45:45.169794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.349 [2024-12-06 15:45:45.169801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.349 [2024-12-06 15:45:45.169807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.349 [2024-12-06 15:45:45.169823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.349 qpair failed and we were unable to recover it. 
00:28:39.349 [2024-12-06 15:45:45.179817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.349 [2024-12-06 15:45:45.179869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.349 [2024-12-06 15:45:45.179883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.349 [2024-12-06 15:45:45.179890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.349 [2024-12-06 15:45:45.179896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.349 [2024-12-06 15:45:45.179915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.349 qpair failed and we were unable to recover it. 
00:28:39.349 [2024-12-06 15:45:45.189874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.349 [2024-12-06 15:45:45.189930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.349 [2024-12-06 15:45:45.189943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.349 [2024-12-06 15:45:45.189950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.349 [2024-12-06 15:45:45.189957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.349 [2024-12-06 15:45:45.189972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.349 qpair failed and we were unable to recover it. 
00:28:39.349 [2024-12-06 15:45:45.199822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.349 [2024-12-06 15:45:45.199878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.349 [2024-12-06 15:45:45.199891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.349 [2024-12-06 15:45:45.199898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.349 [2024-12-06 15:45:45.199905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.349 [2024-12-06 15:45:45.199919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.349 qpair failed and we were unable to recover it. 
00:28:39.349 [2024-12-06 15:45:45.209843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.349 [2024-12-06 15:45:45.209899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.349 [2024-12-06 15:45:45.209913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.349 [2024-12-06 15:45:45.209921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.349 [2024-12-06 15:45:45.209928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.349 [2024-12-06 15:45:45.209943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.349 qpair failed and we were unable to recover it. 
00:28:39.349 [2024-12-06 15:45:45.219986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.349 [2024-12-06 15:45:45.220043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.349 [2024-12-06 15:45:45.220056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.349 [2024-12-06 15:45:45.220063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.349 [2024-12-06 15:45:45.220070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.349 [2024-12-06 15:45:45.220085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.349 qpair failed and we were unable to recover it. 
00:28:39.349 [2024-12-06 15:45:45.229953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.349 [2024-12-06 15:45:45.230008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.349 [2024-12-06 15:45:45.230022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.349 [2024-12-06 15:45:45.230029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.349 [2024-12-06 15:45:45.230036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e8c000b90 00:28:39.349 [2024-12-06 15:45:45.230050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:39.349 qpair failed and we were unable to recover it. 
00:28:39.349 [2024-12-06 15:45:45.240055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.349 [2024-12-06 15:45:45.240177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.349 [2024-12-06 15:45:45.240231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.349 [2024-12-06 15:45:45.240257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.349 [2024-12-06 15:45:45.240279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e84000b90 00:28:39.349 [2024-12-06 15:45:45.240331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.349 qpair failed and we were unable to recover it. 
00:28:39.349 [2024-12-06 15:45:45.250031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.349 [2024-12-06 15:45:45.250102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.349 [2024-12-06 15:45:45.250129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.349 [2024-12-06 15:45:45.250144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.349 [2024-12-06 15:45:45.250157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e84000b90 00:28:39.349 [2024-12-06 15:45:45.250189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:39.349 qpair failed and we were unable to recover it. 
00:28:39.349 [2024-12-06 15:45:45.260078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.349 [2024-12-06 15:45:45.260193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.349 [2024-12-06 15:45:45.260248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.349 [2024-12-06 15:45:45.260273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.349 [2024-12-06 15:45:45.260293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e80000b90 00:28:39.349 [2024-12-06 15:45:45.260344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.349 qpair failed and we were unable to recover it. 
00:28:39.349 [2024-12-06 15:45:45.270093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.349 [2024-12-06 15:45:45.270167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.349 [2024-12-06 15:45:45.270201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.349 [2024-12-06 15:45:45.270217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.349 [2024-12-06 15:45:45.270230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6e80000b90 00:28:39.349 [2024-12-06 15:45:45.270261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:39.349 qpair failed and we were unable to recover it. 
00:28:39.349 [2024-12-06 15:45:45.280131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.349 [2024-12-06 15:45:45.280233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.349 [2024-12-06 15:45:45.280293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.349 [2024-12-06 15:45:45.280319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.349 [2024-12-06 15:45:45.280341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2179be0 00:28:39.349 [2024-12-06 15:45:45.280412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.349 qpair failed and we were unable to recover it. 
00:28:39.349 [2024-12-06 15:45:45.290123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.349 [2024-12-06 15:45:45.290209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.349 [2024-12-06 15:45:45.290238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.349 [2024-12-06 15:45:45.290252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.349 [2024-12-06 15:45:45.290266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2179be0 00:28:39.349 [2024-12-06 15:45:45.290296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.349 qpair failed and we were unable to recover it. 00:28:39.349 [2024-12-06 15:45:45.290427] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:28:39.349 A controller has encountered a failure and is being reset. 00:28:39.349 [2024-12-06 15:45:45.290532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2187b20 (9): Bad file descriptor 00:28:39.349 Controller properly reset. 
00:28:39.349 Initializing NVMe Controllers 00:28:39.349 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:39.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:39.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:39.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:39.349 Initialization complete. Launching workers. 00:28:39.349 Starting thread on core 1 00:28:39.349 Starting thread on core 2 00:28:39.349 Starting thread on core 3 00:28:39.349 Starting thread on core 0 00:28:39.349 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:39.349 00:28:39.349 real 0m10.682s 00:28:39.349 user 0m19.250s 00:28:39.349 sys 0m4.791s 00:28:39.349 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.349 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:39.349 ************************************ 00:28:39.349 END TEST nvmf_target_disconnect_tc2 00:28:39.349 ************************************ 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:39.605 15:45:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:39.605 rmmod nvme_tcp 00:28:39.605 rmmod nvme_fabrics 00:28:39.605 rmmod nvme_keyring 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3171169 ']' 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3171169 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3171169 ']' 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3171169 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3171169 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3171169' 00:28:39.605 killing process with pid 3171169 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3171169 00:28:39.605 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3171169 00:28:39.863 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:39.863 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:39.863 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:39.863 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:39.863 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:39.863 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:39.863 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:39.863 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:39.863 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:39.863 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.863 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.863 15:45:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.767 15:45:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:41.767 00:28:41.767 real 0m19.479s 00:28:41.767 user 0m46.625s 00:28:41.767 
sys 0m9.665s 00:28:41.767 15:45:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:41.767 15:45:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:41.767 ************************************ 00:28:41.767 END TEST nvmf_target_disconnect 00:28:41.767 ************************************ 00:28:42.026 15:45:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:42.026 00:28:42.026 real 5m51.480s 00:28:42.026 user 10m31.664s 00:28:42.026 sys 1m58.500s 00:28:42.026 15:45:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:42.026 15:45:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.026 ************************************ 00:28:42.026 END TEST nvmf_host 00:28:42.026 ************************************ 00:28:42.027 15:45:47 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:42.027 15:45:47 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:42.027 15:45:47 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:42.027 15:45:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:42.027 15:45:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.027 15:45:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:42.027 ************************************ 00:28:42.027 START TEST nvmf_target_core_interrupt_mode 00:28:42.027 ************************************ 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:42.027 * Looking for test storage... 
00:28:42.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:42.027 15:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:42.027 15:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:42.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.027 --rc 
genhtml_branch_coverage=1 00:28:42.027 --rc genhtml_function_coverage=1 00:28:42.027 --rc genhtml_legend=1 00:28:42.027 --rc geninfo_all_blocks=1 00:28:42.027 --rc geninfo_unexecuted_blocks=1 00:28:42.027 00:28:42.027 ' 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:42.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.027 --rc genhtml_branch_coverage=1 00:28:42.027 --rc genhtml_function_coverage=1 00:28:42.027 --rc genhtml_legend=1 00:28:42.027 --rc geninfo_all_blocks=1 00:28:42.027 --rc geninfo_unexecuted_blocks=1 00:28:42.027 00:28:42.027 ' 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:42.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.027 --rc genhtml_branch_coverage=1 00:28:42.027 --rc genhtml_function_coverage=1 00:28:42.027 --rc genhtml_legend=1 00:28:42.027 --rc geninfo_all_blocks=1 00:28:42.027 --rc geninfo_unexecuted_blocks=1 00:28:42.027 00:28:42.027 ' 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:42.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.027 --rc genhtml_branch_coverage=1 00:28:42.027 --rc genhtml_function_coverage=1 00:28:42.027 --rc genhtml_legend=1 00:28:42.027 --rc geninfo_all_blocks=1 00:28:42.027 --rc geninfo_unexecuted_blocks=1 00:28:42.027 00:28:42.027 ' 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.027 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.287 
15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.287 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.288 15:45:48 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:42.288 
15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:42.288 ************************************ 00:28:42.288 START TEST nvmf_abort 00:28:42.288 ************************************ 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:42.288 * Looking for test storage... 
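The trace above shows nvmf/common.sh conditionally assembling the target's argument array (`-i`, `-e 0xFFFF`, `--interrupt-mode`). A minimal standalone sketch of that pattern — simplified, with illustrative values; not SPDK's actual script:

```shell
# Sketch of the conditional argument-array assembly traced above.
# Variable names mirror the log; the values here are illustrative.
NVMF_APP=(nvmf_tgt)
NVMF_APP_SHM_ID=0
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)

# The log's '[' 1 -eq 1 ']' branch: append --interrupt-mode only
# when interrupt mode is requested (hypothetical flag variable).
interrupt_mode=1
if [ "$interrupt_mode" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)
fi

echo "${NVMF_APP[@]}"   # prints: nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode
```

Building the command as a bash array (rather than a string) keeps each flag a distinct word, so later expansion as `"${NVMF_APP[@]}"` survives spaces and empty elements.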
00:28:42.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:42.288 15:45:48 
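The cmp_versions trace above splits each version string on `.-:` into an array and compares the fields numerically, position by position. A self-contained sketch of the same "less than" check — a reimplementation for illustration, not SPDK's scripts/common.sh:

```shell
# Sketch of a dotted-version comparison like the cmp_versions helper
# traced above (standalone reimplementation, not SPDK's actual script).
# lt A B returns 0 (true) when version A < version B.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # Missing fields count as 0, so "2" compares like "2.0".
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal is not "less than"
}

lt 1.15 2 && echo "1.15 < 2"   # the exact comparison from the log
```

This is the check the log performs against the installed lcov version (1.15 vs 2) to pick the right coverage options.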
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:42.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.288 --rc genhtml_branch_coverage=1 00:28:42.288 --rc genhtml_function_coverage=1 00:28:42.288 --rc genhtml_legend=1 00:28:42.288 --rc geninfo_all_blocks=1 00:28:42.288 --rc geninfo_unexecuted_blocks=1 00:28:42.288 00:28:42.288 ' 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:42.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.288 --rc genhtml_branch_coverage=1 00:28:42.288 --rc genhtml_function_coverage=1 00:28:42.288 --rc genhtml_legend=1 00:28:42.288 --rc geninfo_all_blocks=1 00:28:42.288 --rc geninfo_unexecuted_blocks=1 00:28:42.288 00:28:42.288 ' 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:42.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.288 --rc genhtml_branch_coverage=1 00:28:42.288 --rc genhtml_function_coverage=1 00:28:42.288 --rc genhtml_legend=1 00:28:42.288 --rc geninfo_all_blocks=1 00:28:42.288 --rc geninfo_unexecuted_blocks=1 00:28:42.288 00:28:42.288 ' 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:42.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.288 --rc genhtml_branch_coverage=1 00:28:42.288 --rc genhtml_function_coverage=1 00:28:42.288 --rc genhtml_legend=1 00:28:42.288 --rc geninfo_all_blocks=1 00:28:42.288 --rc geninfo_unexecuted_blocks=1 00:28:42.288 00:28:42.288 ' 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.288 15:45:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:42.288 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:42.289 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.289 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.289 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.289 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.548 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.548 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:42.549 15:45:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:42.549 15:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:49.123 15:45:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:49.123 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:49.124 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:49.124 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:49.124 
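The device-discovery trace above buckets PCI NICs by vendor:device ID (Intel e810/x722 vs Mellanox) before matching them, here classifying both 0x8086:0x159b ports as ice/e810. A sketch of that classification using only the ID subset visible in the log — illustrative, not SPDK's actual helper:

```shell
# Sketch of the vendor:device bucketing the trace above performs.
# ID subset copied from the log; a real table would be larger.
classify() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:0xa2dc|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0x101d|\
        0x15b3:0x101b|0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x1015|\
        0x15b3:0x1013)               echo mlx ;;
        *)                           echo unknown ;;
    esac
}

classify 0x8086 0x159b   # the NICs found in the log: prints "e810"
```

The log then walks each matched device's /sys/bus/pci/devices/$pci/net/ directory to find its kernel net device name (cvl_0_0, cvl_0_1).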
15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:49.124 Found net devices under 0000:86:00.0: cvl_0_0 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:49.124 Found net devices under 0000:86:00.1: cvl_0_1 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:49.124 15:45:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:49.124 15:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:49.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:49.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:28:49.124 00:28:49.124 --- 10.0.0.2 ping statistics --- 00:28:49.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.124 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:49.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:49.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:28:49.124 00:28:49.124 --- 10.0.0.1 ping statistics --- 00:28:49.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.124 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3175917 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3175917 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3175917 ']' 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:49.124 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.124 [2024-12-06 15:45:54.299275] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:49.124 [2024-12-06 15:45:54.300173] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:28:49.125 [2024-12-06 15:45:54.300208] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:49.125 [2024-12-06 15:45:54.378123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:49.125 [2024-12-06 15:45:54.419347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:49.125 [2024-12-06 15:45:54.419385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:49.125 [2024-12-06 15:45:54.419392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:49.125 [2024-12-06 15:45:54.419398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:49.125 [2024-12-06 15:45:54.419403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:49.125 [2024-12-06 15:45:54.420756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:49.125 [2024-12-06 15:45:54.420866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.125 [2024-12-06 15:45:54.420866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:49.125 [2024-12-06 15:45:54.487746] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:49.125 [2024-12-06 15:45:54.488457] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:49.125 [2024-12-06 15:45:54.488590] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:49.125 [2024-12-06 15:45:54.488735] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.125 [2024-12-06 15:45:54.553725] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:49.125 Malloc0 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.125 Delay0 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.125 [2024-12-06 15:45:54.637608] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.125 15:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:49.125 [2024-12-06 15:45:54.725230] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:51.024 Initializing NVMe Controllers 00:28:51.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:51.024 controller IO queue size 128 less than required 00:28:51.024 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:51.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:51.024 Initialization complete. Launching workers. 
00:28:51.024 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37916 00:28:51.024 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37977, failed to submit 66 00:28:51.024 success 37916, unsuccessful 61, failed 0 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.024 rmmod nvme_tcp 00:28:51.024 rmmod nvme_fabrics 00:28:51.024 rmmod nvme_keyring 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.024 15:45:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3175917 ']' 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3175917 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3175917 ']' 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3175917 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3175917 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3175917' 00:28:51.024 killing process with pid 3175917 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3175917 00:28:51.024 15:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3175917 00:28:51.283 15:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.283 15:45:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:51.283 15:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.283 15:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:51.283 15:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:51.283 15:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.283 15:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:51.283 15:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.283 15:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.283 15:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.283 15:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.283 15:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.184 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.443 00:28:53.443 real 0m11.101s 00:28:53.443 user 0m10.269s 00:28:53.443 sys 0m5.594s 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.443 ************************************ 00:28:53.443 END TEST nvmf_abort 00:28:53.443 ************************************ 00:28:53.443 15:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:53.443 ************************************ 00:28:53.443 START TEST nvmf_ns_hotplug_stress 00:28:53.443 ************************************ 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:53.443 * Looking for test storage... 
00:28:53.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:53.443 15:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:53.443 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:53.444 15:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:53.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.444 --rc genhtml_branch_coverage=1 00:28:53.444 --rc genhtml_function_coverage=1 00:28:53.444 --rc genhtml_legend=1 00:28:53.444 --rc geninfo_all_blocks=1 00:28:53.444 --rc geninfo_unexecuted_blocks=1 00:28:53.444 00:28:53.444 ' 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:53.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.444 --rc genhtml_branch_coverage=1 00:28:53.444 --rc genhtml_function_coverage=1 00:28:53.444 --rc genhtml_legend=1 00:28:53.444 --rc geninfo_all_blocks=1 00:28:53.444 --rc geninfo_unexecuted_blocks=1 00:28:53.444 00:28:53.444 ' 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:53.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.444 --rc genhtml_branch_coverage=1 00:28:53.444 --rc genhtml_function_coverage=1 00:28:53.444 --rc genhtml_legend=1 00:28:53.444 --rc geninfo_all_blocks=1 00:28:53.444 --rc geninfo_unexecuted_blocks=1 00:28:53.444 00:28:53.444 ' 00:28:53.444 15:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:53.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.444 --rc genhtml_branch_coverage=1 00:28:53.444 --rc genhtml_function_coverage=1 00:28:53.444 --rc genhtml_legend=1 00:28:53.444 --rc geninfo_all_blocks=1 00:28:53.444 --rc geninfo_unexecuted_blocks=1 00:28:53.444 00:28:53.444 ' 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.444 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.703 15:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.703 
15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:53.703 15:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:00.273 
15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.273 15:46:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:00.273 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.273 15:46:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:00.273 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.273 
15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:00.273 Found net devices under 0000:86:00.0: cvl_0_0 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.273 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:00.273 Found net devices under 0000:86:00.1: cvl_0_1 00:29:00.273 
15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:00.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:00.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:29:00.274 00:29:00.274 --- 10.0.0.2 ping statistics --- 00:29:00.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.274 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:00.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:29:00.274 00:29:00.274 --- 10.0.0.1 ping statistics --- 00:29:00.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.274 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:00.274 15:46:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3179786 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3179786 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3179786 ']' 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:00.274 [2024-12-06 15:46:05.409862] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:00.274 [2024-12-06 15:46:05.410773] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:29:00.274 [2024-12-06 15:46:05.410805] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.274 [2024-12-06 15:46:05.474015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:00.274 [2024-12-06 15:46:05.516508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.274 [2024-12-06 15:46:05.516546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.274 [2024-12-06 15:46:05.516554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.274 [2024-12-06 15:46:05.516560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.274 [2024-12-06 15:46:05.516566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:00.274 [2024-12-06 15:46:05.517995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.274 [2024-12-06 15:46:05.518198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.274 [2024-12-06 15:46:05.518199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.274 [2024-12-06 15:46:05.585644] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:00.274 [2024-12-06 15:46:05.586412] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:00.274 [2024-12-06 15:46:05.586486] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:00.274 [2024-12-06 15:46:05.586643] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:00.274 [2024-12-06 15:46:05.830862] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.274 15:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:00.274 15:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.274 [2024-12-06 15:46:06.215405] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.274 15:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:00.533 15:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:00.792 Malloc0 00:29:00.792 15:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:01.051 Delay0 00:29:01.051 15:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.051 15:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:01.310 NULL1 00:29:01.310 15:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:29:01.570 15:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3180176 00:29:01.570 15:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:01.570 15:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:01.570 15:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.829 15:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.829 15:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:01.829 15:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:02.088 true 00:29:02.088 15:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:02.088 15:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.346 15:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.605 15:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:02.605 15:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:02.864 true 00:29:02.864 15:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:02.864 15:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:03.122 15:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:03.122 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:03.122 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:03.380 true 00:29:03.380 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:03.380 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:03.639 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:03.897 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:03.897 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:04.156 true 00:29:04.156 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:04.156 15:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.414 15:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.414 15:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:04.414 15:46:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:04.671 true 00:29:04.671 15:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:04.672 15:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.931 15:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.190 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:05.190 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:05.447 true 00:29:05.447 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:05.448 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.448 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.705 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:29:05.705 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:05.962 true 00:29:05.963 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:05.963 15:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.221 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.479 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:06.479 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:06.479 true 00:29:06.738 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:06.738 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.738 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.997 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:29:06.997 15:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:07.255 true 00:29:07.255 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:07.255 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.514 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.772 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:07.772 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:08.031 true 00:29:08.031 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:08.031 15:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.031 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.290 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:08.290 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:08.549 true 00:29:08.549 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:08.549 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.808 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.067 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:09.067 15:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:09.327 true 00:29:09.327 15:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:09.327 15:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.587 15:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.587 15:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:09.587 15:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:09.844 true 00:29:09.844 15:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:09.844 15:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.101 15:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.358 15:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:10.358 15:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:10.616 true 00:29:10.616 15:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:10.616 15:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.874 15:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:29:10.874 15:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:10.874 15:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:11.132 true 00:29:11.132 15:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:11.132 15:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.390 15:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.647 15:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:11.647 15:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:11.905 true 00:29:11.905 15:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:11.905 15:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.163 15:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.163 15:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:12.163 15:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:12.420 true 00:29:12.420 15:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:12.420 15:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.678 15:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.937 15:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:12.937 15:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:13.194 true 00:29:13.194 15:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:13.194 15:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.453 15:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.453 15:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:13.453 15:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:13.711 true 00:29:13.711 15:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:13.712 15:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.970 15:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.227 15:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:14.227 15:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:14.484 true 00:29:14.484 15:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:14.484 15:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:14.741 15:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.741 15:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:14.741 15:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:15.000 true 00:29:15.000 15:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:15.000 15:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.257 15:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:15.515 15:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:15.516 15:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:15.774 true 00:29:15.774 15:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:15.774 15:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.033 15:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.033 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:16.033 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:16.291 true 00:29:16.291 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:16.291 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.550 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.809 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:16.809 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:17.082 true 00:29:17.082 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:17.082 15:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:17.082 15:46:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:17.339 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:17.340 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:17.597 true 00:29:17.597 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:17.597 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:17.884 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:18.190 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:18.190 15:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:18.190 true 00:29:18.190 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:18.190 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:29:18.447 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:18.705 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:18.705 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:18.963 true 00:29:18.963 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:18.963 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:19.221 15:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:19.221 15:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:19.221 15:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:19.479 true 00:29:19.479 15:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:19.479 15:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:19.737 15:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:19.995 15:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:29:19.995 15:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:29:20.253 true 00:29:20.253 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:20.253 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.511 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:20.511 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:29:20.511 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:29:20.770 true 00:29:20.770 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:20.770 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.028 15:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.285 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:29:21.285 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:29:21.543 true 00:29:21.543 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:21.543 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.800 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.800 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:29:21.800 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:29:22.058 true 00:29:22.058 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:22.058 15:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.316 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:22.575 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:29:22.575 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:29:22.834 true 00:29:22.834 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:22.834 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.092 15:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.092 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:29:23.092 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:29:23.350 true 00:29:23.350 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:23.350 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.607 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.866 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:29:23.866 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:29:24.124 true 00:29:24.124 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:24.124 15:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.382 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:24.640 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:29:24.640 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:29:24.640 true 00:29:24.640 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:24.640 15:46:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.898 15:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:25.157 15:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:29:25.157 15:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:29:25.415 true 00:29:25.415 15:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:25.415 15:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:25.673 15:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:25.673 15:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:29:25.673 15:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:29:25.930 true 00:29:25.930 15:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 
00:29:25.930 15:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.188 15:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:26.446 15:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:29:26.446 15:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:29:26.705 true 00:29:26.705 15:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:26.705 15:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.963 15:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:26.963 15:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:29:26.963 15:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:29:27.222 true 00:29:27.222 15:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 3180176 00:29:27.222 15:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.480 15:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.738 15:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:29:27.738 15:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:29:27.996 true 00:29:27.996 15:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:27.996 15:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.255 15:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:28.255 15:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:29:28.255 15:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:29:28.514 true 00:29:28.514 15:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:28.514 15:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.772 15:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:29.030 15:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:29:29.030 15:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:29:29.030 true 00:29:29.288 15:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:29.288 15:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:29.288 15:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:29.545 15:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:29:29.545 15:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:29:29.802 true 00:29:29.802 15:46:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:29.802 15:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:30.060 15:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:30.317 15:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:29:30.317 15:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:29:30.575 true 00:29:30.575 15:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:30.575 15:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:30.575 15:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:30.834 15:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:29:30.834 15:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:29:31.092 true 
00:29:31.093 15:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:31.093 15:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.351 15:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:31.609 15:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:29:31.609 15:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:29:31.609 true 00:29:31.609 15:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:31.609 15:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.875 Initializing NVMe Controllers 00:29:31.875 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:31.875 Controller IO queue size 128, less than required. 00:29:31.875 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:31.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:31.875 Initialization complete. Launching workers. 
00:29:31.875 ========================================================
00:29:31.875                                                                        Latency(us)
00:29:31.875 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:29:31.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   28101.23      13.72    4555.02    1556.49    8275.84
00:29:31.875 ========================================================
00:29:31.875 Total                                                                  :   28101.23      13.72    4555.02    1556.49    8275.84
00:29:31.875
00:29:31.875 15:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:32.133 15:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:29:32.133 15:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:29:32.390 true 00:29:32.390 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3180176 00:29:32.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3180176) - No such process 00:29:32.390 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3180176 00:29:32.390 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.648 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:32.648 15:46:38
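The `ns_hotplug_stress.sh@44`–`@50` markers above trace one loop body per iteration: probe the backgrounded perf app with `kill -0`, detach namespace 1, re-attach the `Delay0` bdev, bump `null_size`, and resize the `NULL1` null bdev. Once the perf app exits, `kill -0` fails with "No such process" and the loop ends, as the log shows. A minimal stand-alone sketch of that loop — `rpc` is an `echo` stub standing in for `scripts/rpc.py`, and the PID and size bounds are illustrative, not taken from the real script:

```shell
# Hedged reconstruction of the loop traced by ns_hotplug_stress.sh@44-50.
# rpc is an echo stub for scripts/rpc.py; perf_pid and the bounds are invented.
rpc() { echo "rpc.py $*"; }

perf_pid=$$        # in the real test: the backgrounded perf app (e.g. 3180176)
null_size=1025
while kill -0 "$perf_pid" 2>/dev/null && ((null_size < 1029)); do  # @44
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # @45
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # @46
    ((++null_size))                                                # @49
    rpc bdev_null_resize NULL1 "$null_size"                        # @50
done
```

In the real run the loop has no size bound; it spins until the perf process disappears, which is why the log ends this phase with the `kill: (3180176) - No such process` line followed by `wait`.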
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:32.648 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:32.648 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:32.648 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:32.648 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:32.905 null0 00:29:32.905 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:32.905 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:32.905 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:33.163 null1 00:29:33.164 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:33.164 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:33.164 15:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:33.164 null2 00:29:33.422 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:33.422 15:46:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:33.422 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:33.422 null3 00:29:33.422 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:33.422 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:33.422 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:33.681 null4 00:29:33.681 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:33.681 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:33.681 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:33.939 null5 00:29:33.939 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:33.939 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:33.939 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:33.939 null6 00:29:34.198 15:46:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:34.199 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:34.199 15:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:34.199 null7 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
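The `@58`–`@60` lines above set up the parallel phase: `nthreads=8` and one `bdev_null_create nullN 100 4096` call per thread, creating eight 100 MB null bdevs with a 4096-byte block size. Sketched stand-alone, with `rpc` again stubbed by `echo`:

```shell
# Hedged sketch of the ns_hotplug_stress.sh@58-60 setup: create null0..null7.
# rpc is an echo stub for scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

nthreads=8
created=()
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096   # name, size in MB, block size
    created+=("null$i")
done
```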
00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3185571 3185572 3185574 3185576 3185578 3185580 3185582 3185583 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.199 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:34.458 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
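The interleaved `@62`–`@64` counters and the `wait 3185571 3185572 … 3185583` line above come from launching eight `add_remove` workers in the background and collecting their PIDs. Each worker (the `@14`–`@18` markers) loops, attaching its null bdev under a fixed namespace ID and detaching it again, which is why the add/remove RPC lines for namespaces 1–8 appear shuffled in the log. A stand-alone sketch — `rpc` is an `echo` stub, the iteration count is cut to 3 and the worker count to 2 for brevity (the log suggests 10 iterations and 8 workers):

```shell
# Hedged sketch of the add_remove worker (ns_hotplug_stress.sh@14-18) and its
# @62-@66 launcher. rpc is an echo stub for scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

add_remove() {                     # add_remove <nsid> <bdev>
    local nsid=$1 bdev=$2
    for ((i = 0; i < 3; i++)); do  # the real worker appears to loop 10 times
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
    done
}

# launcher shape, as in @62-@66: one background worker per bdev, then wait
pids=()
for ((n = 0; n < 2; n++)); do      # 8 workers in the real test
    add_remove "$((n + 1))" "null$n" &
    pids+=($!)
done
wait "${pids[@]}"
```

Because the workers run concurrently, their RPC output interleaves nondeterministically, matching the scrambled ordering of the `nvmf_subsystem_add_ns`/`nvmf_subsystem_remove_ns` lines in this section of the log.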
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:34.458 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:34.458 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.458 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:34.458 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:34.458 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:34.458 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:34.458 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.717 15:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:34.717 15:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.717 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.976 15:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.976 15:46:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.976 15:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:35.234 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.234 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:35.234 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:35.234 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:35.234 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:35.234 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:35.234 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:35.234 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:35.492 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.492 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.492 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:35.492 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:35.493 15:46:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.493 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:35.752 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:35.752 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:35.752 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:35.752 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:35.752 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.752 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:35.752 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:35.752 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:36.010 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.010 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.010 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.011 15:46:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.011 15:46:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:36.011 15:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:36.011 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:36.011 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 
null0 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.270 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:36.527 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:36.527 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:36.527 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:36.527 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:29:36.527 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.527 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:36.528 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:36.528 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:36.784 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.784 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.784 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:36.784 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.785 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:37.043 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:37.043 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.043 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:37.043 15:46:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:37.043 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:37.043 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:37.043 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:37.043 15:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.043 15:46:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.043 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:37.302 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.302 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.302 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:37.302 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:37.302 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:37.302 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:37.302 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.302 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:37.302 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:37.302 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:37.302 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 
null6 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.560 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.561 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:37.561 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.561 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.561 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:37.819 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:37.819 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:37.819 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.819 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:37.819 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:37.819 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:37.819 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:37.819 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.076 15:46:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:38.076 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.077 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.077 15:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:38.077 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:38.077 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:38.077 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:38.077 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:38.077 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:38.077 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.077 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:38.077 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:38.334 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.334 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.334 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.334 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.334 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.334 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.334 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.334 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.334 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.334 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.334 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.334 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.334 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.334 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.335 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:38.335 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:38.335 15:46:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:38.335 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:38.335 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:38.335 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:38.335 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:38.335 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:38.335 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:38.335 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:38.335 rmmod nvme_tcp 00:29:38.335 rmmod nvme_fabrics 00:29:38.593 rmmod nvme_keyring 00:29:38.593 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:38.593 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:38.593 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:38.593 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3179786 ']' 00:29:38.593 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3179786 00:29:38.593 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3179786 ']' 00:29:38.593 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@958 -- # kill -0 3179786 00:29:38.593 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:38.593 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.593 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3179786 00:29:38.593 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:38.593 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:38.593 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3179786' 00:29:38.593 killing process with pid 3179786 00:29:38.594 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3179786 00:29:38.594 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3179786 00:29:38.852 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:38.852 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:38.852 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:38.852 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:38.852 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:38.852 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:38.852 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:38.852 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:38.852 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:38.852 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.852 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.852 15:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.753 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:40.753 00:29:40.753 real 0m47.410s 00:29:40.753 user 3m2.615s 00:29:40.753 sys 0m21.778s 00:29:40.753 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:40.753 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:40.753 ************************************ 00:29:40.753 END TEST nvmf_ns_hotplug_stress 00:29:40.753 ************************************ 00:29:40.753 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:40.753 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:40.753 15:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:40.753 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:40.753 ************************************ 00:29:40.753 START TEST nvmf_delete_subsystem 00:29:40.753 ************************************ 00:29:40.753 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:41.012 * Looking for test storage... 00:29:41.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:41.012 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:41.012 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.013 15:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:41.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.013 --rc genhtml_branch_coverage=1 00:29:41.013 --rc genhtml_function_coverage=1 00:29:41.013 --rc genhtml_legend=1 00:29:41.013 --rc geninfo_all_blocks=1 00:29:41.013 --rc geninfo_unexecuted_blocks=1 00:29:41.013 00:29:41.013 ' 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:41.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.013 --rc genhtml_branch_coverage=1 00:29:41.013 --rc genhtml_function_coverage=1 00:29:41.013 --rc genhtml_legend=1 00:29:41.013 --rc geninfo_all_blocks=1 00:29:41.013 --rc geninfo_unexecuted_blocks=1 00:29:41.013 00:29:41.013 ' 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:41.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.013 --rc genhtml_branch_coverage=1 00:29:41.013 --rc genhtml_function_coverage=1 00:29:41.013 --rc genhtml_legend=1 00:29:41.013 --rc geninfo_all_blocks=1 00:29:41.013 --rc geninfo_unexecuted_blocks=1 00:29:41.013 00:29:41.013 ' 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:41.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.013 --rc genhtml_branch_coverage=1 00:29:41.013 --rc genhtml_function_coverage=1 00:29:41.013 --rc genhtml_legend=1 00:29:41.013 --rc geninfo_all_blocks=1 00:29:41.013 --rc geninfo_unexecuted_blocks=1 00:29:41.013 00:29:41.013 ' 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:41.013 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:41.014 15:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:41.014 15:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.581 15:46:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.581 15:46:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:47.581 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:47.581 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.581 15:46:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:47.581 Found net devices under 0000:86:00.0: cvl_0_0 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:47.581 Found net devices under 0000:86:00.1: cvl_0_1 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:47.581 15:46:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.581 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.582 15:46:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:47.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:29:47.582 00:29:47.582 --- 10.0.0.2 ping statistics --- 00:29:47.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.582 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:29:47.582 00:29:47.582 --- 10.0.0.1 ping statistics --- 00:29:47.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.582 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.582 
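The trace above (nvmf_tcp_init in nvmf/common.sh) moves one NIC port into a network namespace so the NVMe/TCP target and initiator can talk over real hardware on a single host, then verifies the path with ping. A minimal sketch of the same sequence follows; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are taken from this log, while the DRY_RUN guard is an addition here (it defaults to 1, so the script only prints the privileged commands instead of executing them):

```shell
#!/bin/sh
# Sketch of the namespace setup traced above. DRY_RUN=1 (the default)
# prints each command; DRY_RUN=0 would execute them (requires root).
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_netns() {
  run ip netns add cvl_0_0_ns_spdk                 # namespace for the target side
  run ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
  run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator IP stays on the host
  run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  run ip link set cvl_0_1 up
  run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  run ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port toward the initiator interface
  run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2                           # host -> namespace reachability
  run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and the reverse path
}

setup_netns
```

With both pings succeeding (as in the statistics above), the target can later be launched under `ip netns exec cvl_0_0_ns_spdk` so it listens on 10.0.0.2 while the initiator connects from the host side.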
15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3189910 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3189910 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3189910 ']' 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.582 15:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.582 [2024-12-06 15:46:52.876307] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:47.582 [2024-12-06 15:46:52.877334] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:29:47.582 [2024-12-06 15:46:52.877399] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.582 [2024-12-06 15:46:52.957688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:47.582 [2024-12-06 15:46:52.997256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.582 [2024-12-06 15:46:52.997291] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.582 [2024-12-06 15:46:52.997298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.582 [2024-12-06 15:46:52.997305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.582 [2024-12-06 15:46:52.997311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.582 [2024-12-06 15:46:52.998507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.582 [2024-12-06 15:46:52.998508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.582 [2024-12-06 15:46:53.066679] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:47.582 [2024-12-06 15:46:53.067261] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:47.582 [2024-12-06 15:46:53.067424] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.582 [2024-12-06 15:46:53.147291] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.582 [2024-12-06 15:46:53.175671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.582 NULL1 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.582 Delay0 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.582 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3189963 00:29:47.583 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:47.583 15:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:47.583 [2024-12-06 15:46:53.290731] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
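The rpc_cmd calls traced above build the target that delete_subsystem.sh then tears down under load: a TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev, and a spdk_nvme_perf run against it. A sketch of that sequence as plain commands; the `scripts/rpc.py` and `build/bin/spdk_nvme_perf` paths assume an SPDK checkout as the working directory, and the DRY_RUN guard is an addition here (default 1, print only):

```shell
#!/bin/sh
# Sketch of the delete_subsystem.sh setup steps traced above.
# Arguments mirror the log: -o 8192 I/O unit, 1000 MiB null bdev with
# 512 B blocks, 1 ms (1000000 us) delay bdev latencies, 5 s perf run.
: "${DRY_RUN:=1}"
RPC="scripts/rpc.py"
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_target() {
  run "$RPC" nvmf_create_transport -t tcp -o -u 8192
  run "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10
  run "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  run "$RPC" bdev_null_create NULL1 1000 512
  run "$RPC" bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  run "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # background I/O; nvmf_delete_subsystem is issued while this is running
  run build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
}

setup_target
```

The delay bdev keeps I/O in flight long enough that the subsequent `nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1` races against active commands, which is why the log that follows shows queued I/O completing with errors (sc=8) as the subsystem disappears.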
00:29:49.482 15:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:49.482 15:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.482 15:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[repeated per-I/O completion lines elided: "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)", and "starting I/O failed: -6"]
00:29:49.483 [2024-12-06 15:46:55.347617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c54a0 is same with the state(6) to be set
00:29:50.417 [2024-12-06 15:46:56.304876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c69b0 is same with the state(6) to be set
00:29:50.418 [2024-12-06 15:46:56.344137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99e400d7e0 is same with the state(6) to be set
00:29:50.418 [2024-12-06 15:46:56.344425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99e400d020 is same with the state(6) to be set
00:29:50.418 [2024-12-06 15:46:56.350316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c5680 is same with the state(6) to be set
00:29:50.418 [2024-12-06 15:46:56.350829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4c52c0 is same with the state(6) to be set
00:29:50.418 Initializing NVMe Controllers
00:29:50.418 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:50.418 Controller IO queue size 128, less than required.
00:29:50.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:50.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:29:50.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:29:50.418 Initialization complete. Launching workers.
00:29:50.418 ========================================================
00:29:50.418 Latency(us)
00:29:50.418 Device Information : IOPS MiB/s Average min max
00:29:50.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.19 0.08 941563.11 293.81 2001857.69
00:29:50.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.22 0.08 939564.90 336.64 1995636.05
00:29:50.418 ========================================================
00:29:50.418 Total : 321.41 0.16 940573.25 293.81 2001857.69
00:29:50.418
00:29:50.418 [2024-12-06 15:46:56.351202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4c69b0 (9): Bad file descriptor
00:29:50.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:50.418 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.418 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:29:50.418 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3189963
00:29:50.418 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3189963
00:29:50.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3189963) - No such process
00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3189963
00:29:50.985 15:46:56
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3189963 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3189963 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:50.985 [2024-12-06 15:46:56.883591] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3190493 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3190493 00:29:50.985 15:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:50.985 [2024-12-06 15:46:56.950575] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:51.549 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:51.549 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3190493 00:29:51.549 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:52.114 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:52.114 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3190493 00:29:52.114 15:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:52.680 15:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:52.680 15:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3190493 00:29:52.680 15:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:52.938 15:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 ))
00:29:52.938 15:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3190493
00:29:52.938 15:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:29:53.505 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:53.505 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3190493
00:29:53.505 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:29:54.071 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:54.071 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3190493
00:29:54.071 15:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:29:54.330 Initializing NVMe Controllers
00:29:54.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:54.330 Controller IO queue size 128, less than required.
00:29:54.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:54.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:29:54.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:29:54.330 Initialization complete. Launching workers.
00:29:54.330 ========================================================
00:29:54.330 Latency(us)
00:29:54.330 Device Information : IOPS MiB/s Average min max
00:29:54.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002875.29 1000137.12 1042182.88
00:29:54.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004568.17 1000228.52 1041967.21
00:29:54.330 ========================================================
00:29:54.330 Total : 256.00 0.12 1003721.73 1000137.12 1042182.88
00:29:54.330
00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3190493
00:29:54.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3190493) - No such process
00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3190493
00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:54.590 rmmod nvme_tcp 00:29:54.590 rmmod nvme_fabrics 00:29:54.590 rmmod nvme_keyring 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3189910 ']' 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3189910 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3189910 ']' 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3189910 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3189910 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:54.590 15:47:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3189910' 00:29:54.590 killing process with pid 3189910 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3189910 00:29:54.590 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3189910 00:29:54.850 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:54.850 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:54.850 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:54.850 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:54.850 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:54.850 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:54.850 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:54.850 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:54.850 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:54.850 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.850 15:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.850 15:47:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:57.389 15:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:57.389
00:29:57.389 real 0m16.041s
00:29:57.389 user 0m26.051s
00:29:57.389 sys 0m5.914s
00:29:57.389 15:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:57.389 15:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:29:57.389 ************************************
00:29:57.389 END TEST nvmf_delete_subsystem
00:29:57.389 ************************************
00:29:57.389 15:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:29:57.389 15:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:57.389 15:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:57.389 15:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:29:57.389 ************************************
00:29:57.389 START TEST nvmf_host_management
00:29:57.389 ************************************
00:29:57.389 15:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:29:57.389 * Looking for test storage...
00:29:57.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:57.389 15:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:57.389 15:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:29:57.389 15:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.389 15:47:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:57.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.389 --rc genhtml_branch_coverage=1 00:29:57.389 --rc genhtml_function_coverage=1 00:29:57.389 --rc genhtml_legend=1 00:29:57.389 --rc geninfo_all_blocks=1 00:29:57.389 --rc geninfo_unexecuted_blocks=1 00:29:57.389 00:29:57.389 ' 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:57.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.389 --rc genhtml_branch_coverage=1 00:29:57.389 --rc genhtml_function_coverage=1 00:29:57.389 --rc genhtml_legend=1 00:29:57.389 --rc geninfo_all_blocks=1 00:29:57.389 --rc geninfo_unexecuted_blocks=1 00:29:57.389 00:29:57.389 ' 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:57.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.389 --rc genhtml_branch_coverage=1 00:29:57.389 --rc genhtml_function_coverage=1 00:29:57.389 --rc genhtml_legend=1 00:29:57.389 --rc geninfo_all_blocks=1 00:29:57.389 --rc geninfo_unexecuted_blocks=1 00:29:57.389 00:29:57.389 ' 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:57.389 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.389 --rc genhtml_branch_coverage=1 00:29:57.389 --rc genhtml_function_coverage=1 00:29:57.389 --rc genhtml_legend=1 00:29:57.389 --rc geninfo_all_blocks=1 00:29:57.389 --rc geninfo_unexecuted_blocks=1 00:29:57.389 00:29:57.389 ' 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.389 15:47:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.389 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.390 
15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:57.390 15:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.955 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.955 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:03.955 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:03.955 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:03.955 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:03.955 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:03.955 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:03.955 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:03.955 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:03.955 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:03.955 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:03.956 
15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.956 15:47:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:03.956 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.956 15:47:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:03.956 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.956 15:47:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:03.956 Found net devices under 0000:86:00.0: cvl_0_0 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:03.956 Found net devices under 0000:86:00.1: cvl_0_1 00:30:03.956 15:47:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:03.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:03.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:30:03.956 00:30:03.956 --- 10.0.0.2 ping statistics --- 00:30:03.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.956 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:30:03.956 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:03.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:30:03.957 00:30:03.957 --- 10.0.0.1 ping statistics --- 00:30:03.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.957 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:30:03.957 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.957 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:03.957 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:03.957 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.957 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:03.957 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:03.957 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:30:03.957 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:03.957 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:03.957 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:03.957 15:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3194639 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3194639 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3194639 ']' 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.957 [2024-12-06 15:47:09.060370] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:03.957 [2024-12-06 15:47:09.061275] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:30:03.957 [2024-12-06 15:47:09.061308] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.957 [2024-12-06 15:47:09.140882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:03.957 [2024-12-06 15:47:09.182761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.957 [2024-12-06 15:47:09.182796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:03.957 [2024-12-06 15:47:09.182803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.957 [2024-12-06 15:47:09.182809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.957 [2024-12-06 15:47:09.182814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
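`waitforlisten` (autotest_common.sh) blocks with `max_retries=100` until the freshly launched nvmf_tgt is up and its RPC socket `/var/tmp/spdk.sock` is ready. A simplified stand-in for the polling part (it only waits for the socket path to appear, whereas the real helper also checks the target process is alive and that the RPC server answers):

```shell
# wait_for_path: poll until PATH exists or RETRIES attempts are exhausted.
# DELAY is seconds between attempts. Simplified: the real waitforlisten
# also verifies the pid is still running and probes the RPC socket.
wait_for_path() {
    path="$1"; retries="${2:-100}"; delay="${3:-0.1}"
    i=0
    while [ "$i" -lt "$retries" ]; do
        [ -e "$path" ] && return 0
        sleep "$delay"
        i=$((i + 1))
    done
    return 1
}
```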
00:30:03.957 [2024-12-06 15:47:09.184280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:03.957 [2024-12-06 15:47:09.184428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:03.957 [2024-12-06 15:47:09.184457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.957 [2024-12-06 15:47:09.184457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:03.957 [2024-12-06 15:47:09.252273] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:03.957 [2024-12-06 15:47:09.253062] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:03.957 [2024-12-06 15:47:09.253144] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:03.957 [2024-12-06 15:47:09.253388] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:03.957 [2024-12-06 15:47:09.253437] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
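Once the reactors and poll groups report interrupt mode, the test configures the target over its RPC socket: the entries that follow show the TCP transport being created and a listener appearing on 10.0.0.2:4420 backed by Malloc0. A sketch of the usual rpc.py sequence behind that (the Malloc size/block parameters and serial number are assumed, not taken from the log):

```shell
# Sketch only: configure an NVMe-oF TCP target via SPDK's rpc.py.
# RPC defaults to rpc.py on PATH; add "-s /var/tmp/spdk.sock" as needed.
RPC="${RPC:-rpc.py}"

setup_target() {
    # Transport flags copied from the log's invocation (-t tcp -o -u 8192).
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    # Back the namespace with a RAM disk (64 MiB / 512 B blocks: assumed).
    "$RPC" bdev_malloc_create 64 512 -b Malloc0
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
}
```

In the log the equivalent RPCs are batched through rpcs.txt and a single `rpc_cmd` invocation rather than issued one by one.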
00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.957 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.957 [2024-12-06 15:47:09.933166] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.216 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.216 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:04.216 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:04.216 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.216 15:47:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:04.216 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:04.216 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:04.216 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.216 15:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.216 Malloc0 00:30:04.216 [2024-12-06 15:47:10.021477] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3194789 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3194789 /var/tmp/bdevperf.sock 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3194789 ']' 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:04.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:04.216 { 00:30:04.216 "params": { 00:30:04.216 "name": "Nvme$subsystem", 00:30:04.216 "trtype": "$TEST_TRANSPORT", 00:30:04.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:04.216 "adrfam": "ipv4", 00:30:04.216 "trsvcid": "$NVMF_PORT", 00:30:04.216 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:04.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:04.216 "hdgst": ${hdgst:-false}, 00:30:04.216 "ddgst": ${ddgst:-false} 00:30:04.216 }, 00:30:04.216 "method": "bdev_nvme_attach_controller" 00:30:04.216 } 00:30:04.216 EOF 00:30:04.216 )") 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:04.216 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:04.216 "params": { 00:30:04.216 "name": "Nvme0", 00:30:04.216 "trtype": "tcp", 00:30:04.216 "traddr": "10.0.0.2", 00:30:04.216 "adrfam": "ipv4", 00:30:04.216 "trsvcid": "4420", 00:30:04.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:04.216 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:04.216 "hdgst": false, 00:30:04.216 "ddgst": false 00:30:04.216 }, 00:30:04.216 "method": "bdev_nvme_attach_controller" 00:30:04.216 }' 00:30:04.216 [2024-12-06 15:47:10.119742] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:30:04.216 [2024-12-06 15:47:10.119794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194789 ] 00:30:04.216 [2024-12-06 15:47:10.196896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.475 [2024-12-06 15:47:10.238517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.733 Running I/O for 10 seconds... 
00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:04.734 15:47:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:30:04.734 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
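The `waitforio` loop here polls `bdev_get_iostat -b Nvme0n1` over the bdevperf RPC socket until at least 100 reads have completed: the first sample reports `read_io_count=78`, so it sleeps 0.25 s and retries; the next reports 707 and the loop breaks. A sketch of the same loop with the JSON field extraction done in portable sed instead of jq (helper names assumed):

```shell
# read_ops: pull the first "num_read_ops" value out of bdev_get_iostat
# JSON arriving on stdin.
read_ops() {
    sed -n 's/.*"num_read_ops"[^0-9]*\([0-9][0-9]*\).*/\1/p' | head -n 1
}

# wait_for_io: retry up to 10 times until BDEV reports >= 100 reads,
# mirroring target/host_management.sh's waitforio.
wait_for_io() {
    sock="$1"; bdev="$2"; i=10
    while [ "$i" -gt 0 ]; do
        count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | read_ops)
        [ "${count:-0}" -ge 100 ] && return 0
        sleep 0.25
        i=$((i - 1))
    done
    return 1
}
```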
00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.998 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.998 [2024-12-06 15:47:10.960439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is 
same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.998 [2024-12-06 15:47:10.960604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.998 [2024-12-06 15:47:10.960643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.998 [2024-12-06 15:47:10.960650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.998 [2024-12-06 15:47:10.960672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.998 [2024-12-06 15:47:10.960679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.998 [2024-12-06 15:47:10.960686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.998 [2024-12-06 15:47:10.960693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.998 [2024-12-06 15:47:10.960700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2d120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960723] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with
the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.998 [2024-12-06 15:47:10.960826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.999 [2024-12-06 15:47:10.960832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.999 [2024-12-06 15:47:10.960838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 
00:30:04.999 [2024-12-06 15:47:10.960844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.999 [2024-12-06 15:47:10.960850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.999 [2024-12-06 15:47:10.960856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.999 [2024-12-06 15:47:10.960862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.999 [2024-12-06 15:47:10.960868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.999 [2024-12-06 15:47:10.960874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2366120 is same with the state(6) to be set 00:30:04.999 [2024-12-06 15:47:10.960990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.999 [2024-12-06 15:47:10.961005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.999 [2024-12-06 15:47:10.961020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.999 [2024-12-06 15:47:10.961028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.999 [2024-12-06 15:47:10.961037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.999 [2024-12-06 15:47:10.961044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.999 [2024-12-06 15:47:10.961052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.999 [2024-12-06 15:47:10.961059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.999 [... identical READ command / "ABORTED - SQ DELETION (00/08)" completion pairs repeated for cid 4 through cid 62 (lba 98816 through 106240, len:128) elided ...] [2024-12-06 15:47:10.961977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.000 [2024-12-06 15:47:10.961983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.000 [2024-12-06 15:47:10.961991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1146060 is same with the state(6) to be set 00:30:05.000 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:30:05.000 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:05.000 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.000 [2024-12-06 15:47:10.962951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:05.000 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:05.000 task offset: 98304 on job bdev=Nvme0n1 fails 00:30:05.000 00:30:05.000 Latency(us) 00:30:05.000 [2024-12-06T14:47:10.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.000 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:05.000 Job: Nvme0n1 ended in about 0.40 seconds with error 00:30:05.000 Verification LBA range: start 0x0 length 0x400 00:30:05.000 Nvme0n1 : 0.40 1909.97 119.37 159.16 0.00 30113.36 3744.91 26963.38 00:30:05.000 [2024-12-06T14:47:10.998Z] =================================================================================================================== 00:30:05.000 [2024-12-06T14:47:10.998Z] Total : 1909.97 119.37 159.16 0.00 30113.36 3744.91 26963.38 00:30:05.000 [2024-12-06 15:47:10.965303] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:05.000 [2024-12-06 15:47:10.965322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2d120 (9): Bad file descriptor 00:30:05.000 [2024-12-06 15:47:10.966374] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:30:05.000 [2024-12-06 15:47:10.966438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 
len:0x400 00:30:05.000 [2024-12-06 15:47:10.966460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.000 [2024-12-06 15:47:10.966474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:30:05.000 [2024-12-06 15:47:10.966482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:30:05.000 [2024-12-06 15:47:10.966488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:05.000 [2024-12-06 15:47:10.966496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf2d120 00:30:05.000 [2024-12-06 15:47:10.966514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2d120 (9): Bad file descriptor 00:30:05.000 [2024-12-06 15:47:10.966525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:05.000 [2024-12-06 15:47:10.966532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:05.000 [2024-12-06 15:47:10.966540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:05.000 [2024-12-06 15:47:10.966549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:30:05.000 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.000 15:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:06.028 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3194789 00:30:06.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3194789) - No such process 00:30:06.028 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:06.028 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:06.028 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:06.028 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:06.028 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:06.028 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.028 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.028 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.028 { 00:30:06.028 "params": { 00:30:06.028 "name": "Nvme$subsystem", 00:30:06.028 "trtype": "$TEST_TRANSPORT", 00:30:06.028 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:30:06.028 "adrfam": "ipv4", 00:30:06.028 "trsvcid": "$NVMF_PORT", 00:30:06.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.028 "hdgst": ${hdgst:-false}, 00:30:06.028 "ddgst": ${ddgst:-false} 00:30:06.028 }, 00:30:06.028 "method": "bdev_nvme_attach_controller" 00:30:06.028 } 00:30:06.028 EOF 00:30:06.028 )") 00:30:06.028 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:06.028 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:06.029 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:06.029 15:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.029 "params": { 00:30:06.029 "name": "Nvme0", 00:30:06.029 "trtype": "tcp", 00:30:06.029 "traddr": "10.0.0.2", 00:30:06.029 "adrfam": "ipv4", 00:30:06.029 "trsvcid": "4420", 00:30:06.029 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:06.029 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:06.029 "hdgst": false, 00:30:06.029 "ddgst": false 00:30:06.029 }, 00:30:06.029 "method": "bdev_nvme_attach_controller" 00:30:06.029 }' 00:30:06.298 [2024-12-06 15:47:12.030875] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:30:06.298 [2024-12-06 15:47:12.030928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195161 ] 00:30:06.298 [2024-12-06 15:47:12.106548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.298 [2024-12-06 15:47:12.146365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.555 Running I/O for 1 seconds... 
00:30:07.746 2048.00 IOPS, 128.00 MiB/s 00:30:07.746 Latency(us) 00:30:07.746 [2024-12-06T14:47:13.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.746 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.746 Verification LBA range: start 0x0 length 0x400 00:30:07.746 Nvme0n1 : 1.07 1980.86 123.80 0.00 0.00 30651.39 7739.49 48434.22 00:30:07.746 [2024-12-06T14:47:13.744Z] =================================================================================================================== 00:30:07.746 [2024-12-06T14:47:13.744Z] Total : 1980.86 123.80 0.00 0.00 30651.39 7739.49 48434.22 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:07.746 
15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:07.746 rmmod nvme_tcp 00:30:07.746 rmmod nvme_fabrics 00:30:07.746 rmmod nvme_keyring 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3194639 ']' 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3194639 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3194639 ']' 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3194639 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:07.746 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3194639 00:30:08.005 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:08.005 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:08.005 15:47:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3194639' 00:30:08.005 killing process with pid 3194639 00:30:08.005 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3194639 00:30:08.005 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3194639 00:30:08.005 [2024-12-06 15:47:13.941068] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:08.005 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:08.005 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:08.005 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:08.005 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:08.006 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:08.006 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:08.006 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:08.006 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:08.006 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:08.006 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.006 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.006 15:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:10.539 00:30:10.539 real 0m13.192s 00:30:10.539 user 0m19.015s 00:30:10.539 sys 0m6.420s 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.539 ************************************ 00:30:10.539 END TEST nvmf_host_management 00:30:10.539 ************************************ 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:10.539 ************************************ 00:30:10.539 START TEST nvmf_lvol 00:30:10.539 ************************************ 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:10.539 * Looking for test storage... 
00:30:10.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:10.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.539 --rc genhtml_branch_coverage=1 00:30:10.539 --rc genhtml_function_coverage=1 00:30:10.539 --rc genhtml_legend=1 00:30:10.539 --rc geninfo_all_blocks=1 00:30:10.539 --rc geninfo_unexecuted_blocks=1 00:30:10.539 00:30:10.539 ' 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:10.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.539 --rc genhtml_branch_coverage=1 00:30:10.539 --rc genhtml_function_coverage=1 00:30:10.539 --rc genhtml_legend=1 00:30:10.539 --rc geninfo_all_blocks=1 00:30:10.539 --rc geninfo_unexecuted_blocks=1 00:30:10.539 00:30:10.539 ' 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:10.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.539 --rc genhtml_branch_coverage=1 00:30:10.539 --rc genhtml_function_coverage=1 00:30:10.539 --rc genhtml_legend=1 00:30:10.539 --rc geninfo_all_blocks=1 00:30:10.539 --rc geninfo_unexecuted_blocks=1 00:30:10.539 00:30:10.539 ' 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:10.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.539 --rc genhtml_branch_coverage=1 00:30:10.539 --rc genhtml_function_coverage=1 00:30:10.539 --rc genhtml_legend=1 00:30:10.539 --rc geninfo_all_blocks=1 00:30:10.539 --rc geninfo_unexecuted_blocks=1 00:30:10.539 00:30:10.539 ' 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.539 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:10.540 
15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:10.540 15:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:17.110 15:47:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:17.110 15:47:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:17.110 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:17.110 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:17.110 15:47:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:17.110 Found net devices under 0000:86:00.0: cvl_0_0 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.110 15:47:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:17.110 Found net devices under 0000:86:00.1: cvl_0_1 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.110 15:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.110 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.110 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.110 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:17.110 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.110 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.110 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:17.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:30:17.111 00:30:17.111 --- 10.0.0.2 ping statistics --- 00:30:17.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.111 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:17.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:30:17.111 00:30:17.111 --- 10.0.0.1 ping statistics --- 00:30:17.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.111 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3198937 
00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3198937 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3198937 ']' 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.111 15:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:17.111 [2024-12-06 15:47:22.282096] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:17.111 [2024-12-06 15:47:22.282996] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:30:17.111 [2024-12-06 15:47:22.283031] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.111 [2024-12-06 15:47:22.362216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:17.111 [2024-12-06 15:47:22.404809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.111 [2024-12-06 15:47:22.404843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.111 [2024-12-06 15:47:22.404850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.111 [2024-12-06 15:47:22.404856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.111 [2024-12-06 15:47:22.404861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.111 [2024-12-06 15:47:22.406121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.111 [2024-12-06 15:47:22.406229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.111 [2024-12-06 15:47:22.406231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:17.111 [2024-12-06 15:47:22.474990] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:17.111 [2024-12-06 15:47:22.475839] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:17.111 [2024-12-06 15:47:22.475851] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
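The `Total cores available: 3` notice and the three `Reactor started on core N` lines above follow directly from the `-m 0x7` core mask passed to `nvmf_tgt`: one reactor per set bit. A small sketch of that mapping:

```shell
# Count the set bits in the core mask 0x7 -> 3 reactors (cores 0, 1, 2),
# matching the app.c and reactor.c notices in the log above.
mask=0x7
v=$((mask))
count=0
while [ "$v" -gt 0 ]; do
  count=$((count + (v & 1)))
  v=$((v >> 1))
done
echo "cores: $count"
```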
00:30:17.111 [2024-12-06 15:47:22.476028] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:17.371 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:17.371 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:17.371 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:17.371 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:17.371 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:17.371 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.371 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:17.371 [2024-12-06 15:47:23.347018] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.630 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:17.630 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:17.630 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:17.889 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:17.889 15:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:18.148 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:18.406 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=281eb785-8819-4d8c-8f22-8d00610b1ea7 00:30:18.406 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 281eb785-8819-4d8c-8f22-8d00610b1ea7 lvol 20 00:30:18.665 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5c7249dc-3204-42df-ad15-c1dcc54f2f99 00:30:18.665 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:18.665 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5c7249dc-3204-42df-ad15-c1dcc54f2f99 00:30:18.924 15:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:19.182 [2024-12-06 15:47:24.978915] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.182 15:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:19.440 
15:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3199432 00:30:19.440 15:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:19.440 15:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:20.371 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5c7249dc-3204-42df-ad15-c1dcc54f2f99 MY_SNAPSHOT 00:30:20.629 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=79e75353-313d-43ac-b295-902f86673b9f 00:30:20.629 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5c7249dc-3204-42df-ad15-c1dcc54f2f99 30 00:30:20.887 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 79e75353-313d-43ac-b295-902f86673b9f MY_CLONE 00:30:21.146 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=fb6aac1a-8eab-4225-b5d4-04a9d20fe459 00:30:21.146 15:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate fb6aac1a-8eab-4225-b5d4-04a9d20fe459 00:30:21.404 15:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3199432 00:30:31.372 Initializing NVMe Controllers 00:30:31.372 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:31.372 
Controller IO queue size 128, less than required. 00:30:31.372 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:31.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:31.372 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:31.372 Initialization complete. Launching workers. 00:30:31.372 ======================================================== 00:30:31.372 Latency(us) 00:30:31.372 Device Information : IOPS MiB/s Average min max 00:30:31.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12634.60 49.35 10133.69 1838.44 90219.84 00:30:31.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12511.50 48.87 10230.32 3520.59 52661.08 00:30:31.372 ======================================================== 00:30:31.372 Total : 25146.10 98.23 10181.77 1838.44 90219.84 00:30:31.372 00:30:31.372 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:31.372 15:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5c7249dc-3204-42df-ad15-c1dcc54f2f99 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 281eb785-8819-4d8c-8f22-8d00610b1ea7 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:31.372 rmmod nvme_tcp 00:30:31.372 rmmod nvme_fabrics 00:30:31.372 rmmod nvme_keyring 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3198937 ']' 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3198937 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3198937 ']' 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3198937 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3198937 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3198937' 00:30:31.372 killing process with pid 3198937 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3198937 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3198937 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.372 15:47:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.372 15:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.751 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:32.751 00:30:32.751 real 0m22.497s 00:30:32.751 user 0m56.052s 00:30:32.751 sys 0m9.634s 00:30:32.751 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:32.751 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:32.751 ************************************ 00:30:32.751 END TEST nvmf_lvol 00:30:32.751 ************************************ 00:30:32.751 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:32.751 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:32.751 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:32.751 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:32.751 ************************************ 00:30:32.751 START TEST nvmf_lvs_grow 00:30:32.751 ************************************ 00:30:32.751 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:33.011 * Looking for test storage... 
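The nvmf_lvs_grow run that follows begins with the `scripts/common.sh` version check traced below (`lt 1.15 2`, comparing the installed lcov version field by field). A hedged simplification of that comparison; the real `cmp_versions` splits on `.` and walks the fields, whereas this sketch leans on GNU `sort -V`:

```shell
# Simplified "lt" (less-than) version check, standing in for the
# field-by-field cmp_versions helper seen in the trace below.
lt() {
  [ "$1" = "$2" ] && return 1
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
if lt 1.15 2; then result="1.15 < 2"; fi
echo "$result"
```

In the trace this check gates which `--rc lcov_*_coverage` options get exported into LCOV_OPTS.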
00:30:33.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:33.011 15:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:33.011 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:33.011 15:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:33.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.012 --rc genhtml_branch_coverage=1 00:30:33.012 --rc genhtml_function_coverage=1 00:30:33.012 --rc genhtml_legend=1 00:30:33.012 --rc geninfo_all_blocks=1 00:30:33.012 --rc geninfo_unexecuted_blocks=1 00:30:33.012 00:30:33.012 ' 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:33.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.012 --rc genhtml_branch_coverage=1 00:30:33.012 --rc genhtml_function_coverage=1 00:30:33.012 --rc genhtml_legend=1 00:30:33.012 --rc geninfo_all_blocks=1 00:30:33.012 --rc geninfo_unexecuted_blocks=1 00:30:33.012 00:30:33.012 ' 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:33.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.012 --rc genhtml_branch_coverage=1 00:30:33.012 --rc genhtml_function_coverage=1 00:30:33.012 --rc genhtml_legend=1 00:30:33.012 --rc geninfo_all_blocks=1 00:30:33.012 --rc geninfo_unexecuted_blocks=1 00:30:33.012 00:30:33.012 ' 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:33.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.012 --rc genhtml_branch_coverage=1 00:30:33.012 --rc genhtml_function_coverage=1 00:30:33.012 --rc genhtml_legend=1 00:30:33.012 --rc geninfo_all_blocks=1 00:30:33.012 --rc 
geninfo_unexecuted_blocks=1 00:30:33.012 00:30:33.012 ' 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:33.012 15:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.012 15:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:33.012 15:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:33.012 15:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:39.586 
15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:39.586 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:39.586 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:39.586 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:39.586 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:39.586 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:39.586 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:39.586 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:39.586 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:39.586 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:39.586 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:39.586 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:39.586 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:39.587 15:47:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:39.587 15:47:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:39.587 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:39.587 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:39.587 Found net devices under 0000:86:00.0: cvl_0_0 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.587 15:47:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:39.587 Found net devices under 0000:86:00.1: cvl_0_1 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:39.587 
15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:39.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:39.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:30:39.587 00:30:39.587 --- 10.0.0.2 ping statistics --- 00:30:39.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.587 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:39.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:39.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:30:39.587 00:30:39.587 --- 10.0.0.1 ping statistics --- 00:30:39.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:39.587 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:39.587 15:47:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3204671 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3204671 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3204671 ']' 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.587 15:47:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:39.587 [2024-12-06 15:47:44.886779] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:39.587 [2024-12-06 15:47:44.887700] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:30:39.587 [2024-12-06 15:47:44.887734] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.587 [2024-12-06 15:47:44.967434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.587 [2024-12-06 15:47:45.007698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.587 [2024-12-06 15:47:45.007733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.587 [2024-12-06 15:47:45.007740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.587 [2024-12-06 15:47:45.007746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.587 [2024-12-06 15:47:45.007751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.587 [2024-12-06 15:47:45.008291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.587 [2024-12-06 15:47:45.075887] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:39.587 [2024-12-06 15:47:45.076104] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
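Earlier in this trace, `gather_supported_nvmf_pci_devs` (nvmf/common.sh) buckets PCI vendor/device IDs into NIC families before picking test interfaces: `0x8086:0x159b` lands in the `e810` array here, which is why the `ice`-driven ports at 0000:86:00.0/1 are selected. A minimal standalone re-expression of that bucketing, with the ID tables copied from the trace (the `classify` helper name is hypothetical, not part of SPDK):

```python
# ID buckets as they appear in the nvmf/common.sh trace above
# (intel=0x8086, mellanox=0x15b3).
INTEL_FAMILIES = {
    ("0x8086", "0x1592"): "e810",
    ("0x8086", "0x159b"): "e810",
    ("0x8086", "0x37d2"): "x722",
}
MELLANOX_DEVICES = {
    "0xa2dc", "0x1021", "0xa2d6", "0x101d", "0x101b",
    "0x1017", "0x1019", "0x1015", "0x1013",
}

def classify(vendor, device):
    """Return the NIC family bucket for a vendor/device ID pair, or None."""
    if vendor == "0x15b3" and device in MELLANOX_DEVICES:
        return "mlx"
    return INTEL_FAMILIES.get((vendor, device))
```

With this table, both `0x159b` ports found in the run classify as `e810`, matching the `pci_devs=("${e810[@]}")` branch the trace takes.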
00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:39.587 [2024-12-06 15:47:45.308944] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:39.587 ************************************ 00:30:39.587 START TEST lvs_grow_clean 00:30:39.587 ************************************ 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:39.587 15:47:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:39.587 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:39.846 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b3d6e5e0-47be-4b49-aa3a-823060deaaf6 00:30:39.846 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3d6e5e0-47be-4b49-aa3a-823060deaaf6 00:30:39.846 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:40.105 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:40.105 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:40.105 15:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b3d6e5e0-47be-4b49-aa3a-823060deaaf6 lvol 150 00:30:40.364 15:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=027ede80-c988-43e8-b5b6-6af82dfbf76f 00:30:40.364 15:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:40.364 15:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:40.364 [2024-12-06 15:47:46.336685] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:40.364 [2024-12-06 15:47:46.336809] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:40.364 true 00:30:40.364 15:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3d6e5e0-47be-4b49-aa3a-823060deaaf6 00:30:40.364 15:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:40.623 15:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:40.623 15:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:40.882 15:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 027ede80-c988-43e8-b5b6-6af82dfbf76f 00:30:41.140 15:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:41.140 [2024-12-06 15:47:47.077143] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.140 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:41.399 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3205066 00:30:41.399 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:41.399 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:41.399 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3205066 /var/tmp/bdevperf.sock 00:30:41.399 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3205066 ']' 00:30:41.399 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:41.399 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:41.399 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:41.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
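The lvstore in this run sits on a 200 MiB aio bdev with a 4 MiB cluster size and reports 49 total data clusters (99 after the file is grown to 400 MiB and rescanned), and the 150 MiB lvol later surfaces in `bdev_get_bdevs` as 38912 blocks of 4096 bytes. A small sketch of that arithmetic, under the assumption (consistent with both the 49 and 99 totals) that one cluster goes to lvstore metadata and that lvol sizes round up to whole clusters:

```python
def data_clusters(lvs_size_mb, cluster_mb=4, md_clusters=1):
    # Assumed model: usable clusters = total clusters minus metadata
    # overhead; matches 200 MiB -> 49 and 400 MiB -> 99 in the trace.
    return lvs_size_mb // cluster_mb - md_clusters

def lvol_num_blocks(lvol_mb, cluster_mb=4, block_size=4096):
    # A 150 MiB lvol rounds up to 38 whole 4 MiB clusters,
    # i.e. 152 MiB = 38912 blocks of 4 KiB.
    clusters = -(-lvol_mb // cluster_mb)  # ceiling division
    return clusters * cluster_mb * 1024 * 1024 // block_size
```

This is only back-of-the-envelope accounting to make the trace's numbers legible; the authoritative layout logic lives in SPDK's lvol/blobstore code.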
00:30:41.400 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:41.400 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:41.400 [2024-12-06 15:47:47.331894] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:30:41.400 [2024-12-06 15:47:47.331942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205066 ] 00:30:41.658 [2024-12-06 15:47:47.405165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.658 [2024-12-06 15:47:47.446930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.658 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:41.658 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:41.659 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:41.918 Nvme0n1 00:30:41.918 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:42.177 [ 00:30:42.177 { 00:30:42.177 "name": "Nvme0n1", 00:30:42.177 "aliases": [ 00:30:42.177 "027ede80-c988-43e8-b5b6-6af82dfbf76f" 00:30:42.177 ], 00:30:42.177 "product_name": "NVMe disk", 00:30:42.177 
"block_size": 4096, 00:30:42.177 "num_blocks": 38912, 00:30:42.177 "uuid": "027ede80-c988-43e8-b5b6-6af82dfbf76f", 00:30:42.177 "numa_id": 1, 00:30:42.177 "assigned_rate_limits": { 00:30:42.177 "rw_ios_per_sec": 0, 00:30:42.177 "rw_mbytes_per_sec": 0, 00:30:42.177 "r_mbytes_per_sec": 0, 00:30:42.177 "w_mbytes_per_sec": 0 00:30:42.177 }, 00:30:42.177 "claimed": false, 00:30:42.177 "zoned": false, 00:30:42.177 "supported_io_types": { 00:30:42.177 "read": true, 00:30:42.177 "write": true, 00:30:42.177 "unmap": true, 00:30:42.177 "flush": true, 00:30:42.177 "reset": true, 00:30:42.177 "nvme_admin": true, 00:30:42.177 "nvme_io": true, 00:30:42.178 "nvme_io_md": false, 00:30:42.178 "write_zeroes": true, 00:30:42.178 "zcopy": false, 00:30:42.178 "get_zone_info": false, 00:30:42.178 "zone_management": false, 00:30:42.178 "zone_append": false, 00:30:42.178 "compare": true, 00:30:42.178 "compare_and_write": true, 00:30:42.178 "abort": true, 00:30:42.178 "seek_hole": false, 00:30:42.178 "seek_data": false, 00:30:42.178 "copy": true, 00:30:42.178 "nvme_iov_md": false 00:30:42.178 }, 00:30:42.178 "memory_domains": [ 00:30:42.178 { 00:30:42.178 "dma_device_id": "system", 00:30:42.178 "dma_device_type": 1 00:30:42.178 } 00:30:42.178 ], 00:30:42.178 "driver_specific": { 00:30:42.178 "nvme": [ 00:30:42.178 { 00:30:42.178 "trid": { 00:30:42.178 "trtype": "TCP", 00:30:42.178 "adrfam": "IPv4", 00:30:42.178 "traddr": "10.0.0.2", 00:30:42.178 "trsvcid": "4420", 00:30:42.178 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:42.178 }, 00:30:42.178 "ctrlr_data": { 00:30:42.178 "cntlid": 1, 00:30:42.178 "vendor_id": "0x8086", 00:30:42.178 "model_number": "SPDK bdev Controller", 00:30:42.178 "serial_number": "SPDK0", 00:30:42.178 "firmware_revision": "25.01", 00:30:42.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:42.178 "oacs": { 00:30:42.178 "security": 0, 00:30:42.178 "format": 0, 00:30:42.178 "firmware": 0, 00:30:42.178 "ns_manage": 0 00:30:42.178 }, 00:30:42.178 "multi_ctrlr": true, 
00:30:42.178 "ana_reporting": false 00:30:42.178 }, 00:30:42.178 "vs": { 00:30:42.178 "nvme_version": "1.3" 00:30:42.178 }, 00:30:42.178 "ns_data": { 00:30:42.178 "id": 1, 00:30:42.178 "can_share": true 00:30:42.178 } 00:30:42.178 } 00:30:42.178 ], 00:30:42.178 "mp_policy": "active_passive" 00:30:42.178 } 00:30:42.178 } 00:30:42.178 ] 00:30:42.178 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3205288 00:30:42.178 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:42.178 15:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:42.178 Running I/O for 10 seconds... 00:30:43.115 Latency(us) 00:30:43.115 [2024-12-06T14:47:49.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:43.115 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:30:43.115 [2024-12-06T14:47:49.113Z] =================================================================================================================== 00:30:43.115 [2024-12-06T14:47:49.113Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:30:43.115 00:30:44.053 15:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b3d6e5e0-47be-4b49-aa3a-823060deaaf6 00:30:44.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:44.312 Nvme0n1 : 2.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:30:44.312 [2024-12-06T14:47:50.310Z] 
=================================================================================================================== 00:30:44.312 [2024-12-06T14:47:50.310Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:30:44.312 00:30:44.312 true 00:30:44.312 15:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:44.312 15:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3d6e5e0-47be-4b49-aa3a-823060deaaf6 00:30:44.571 15:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:44.571 15:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:44.571 15:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3205288 00:30:45.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:45.140 Nvme0n1 : 3.00 23283.33 90.95 0.00 0.00 0.00 0.00 0.00 00:30:45.140 [2024-12-06T14:47:51.138Z] =================================================================================================================== 00:30:45.140 [2024-12-06T14:47:51.138Z] Total : 23283.33 90.95 0.00 0.00 0.00 0.00 0.00 00:30:45.140 00:30:46.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:46.076 Nvme0n1 : 4.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:30:46.076 [2024-12-06T14:47:52.074Z] =================================================================================================================== 00:30:46.076 [2024-12-06T14:47:52.074Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:30:46.076 00:30:47.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:30:47.453 Nvme0n1 : 5.00 23469.60 91.68 0.00 0.00 0.00 0.00 0.00 00:30:47.453 [2024-12-06T14:47:53.451Z] =================================================================================================================== 00:30:47.453 [2024-12-06T14:47:53.451Z] Total : 23469.60 91.68 0.00 0.00 0.00 0.00 0.00 00:30:47.453 00:30:48.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:48.390 Nvme0n1 : 6.00 23516.17 91.86 0.00 0.00 0.00 0.00 0.00 00:30:48.390 [2024-12-06T14:47:54.388Z] =================================================================================================================== 00:30:48.390 [2024-12-06T14:47:54.388Z] Total : 23516.17 91.86 0.00 0.00 0.00 0.00 0.00 00:30:48.390 00:30:49.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:49.326 Nvme0n1 : 7.00 23567.57 92.06 0.00 0.00 0.00 0.00 0.00 00:30:49.326 [2024-12-06T14:47:55.324Z] =================================================================================================================== 00:30:49.326 [2024-12-06T14:47:55.324Z] Total : 23567.57 92.06 0.00 0.00 0.00 0.00 0.00 00:30:49.326 00:30:50.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:50.283 Nvme0n1 : 8.00 23606.12 92.21 0.00 0.00 0.00 0.00 0.00 00:30:50.283 [2024-12-06T14:47:56.281Z] =================================================================================================================== 00:30:50.283 [2024-12-06T14:47:56.281Z] Total : 23606.12 92.21 0.00 0.00 0.00 0.00 0.00 00:30:50.283 00:30:51.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.218 Nvme0n1 : 9.00 23636.11 92.33 0.00 0.00 0.00 0.00 0.00 00:30:51.218 [2024-12-06T14:47:57.216Z] =================================================================================================================== 00:30:51.218 [2024-12-06T14:47:57.216Z] Total : 23636.11 92.33 0.00 0.00 0.00 0.00 0.00 00:30:51.218 
00:30:52.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:52.154 Nvme0n1 : 10.00 23660.10 92.42 0.00 0.00 0.00 0.00 0.00 00:30:52.154 [2024-12-06T14:47:58.152Z] =================================================================================================================== 00:30:52.154 [2024-12-06T14:47:58.152Z] Total : 23660.10 92.42 0.00 0.00 0.00 0.00 0.00 00:30:52.154 00:30:52.154 00:30:52.154 Latency(us) 00:30:52.154 [2024-12-06T14:47:58.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:52.154 Nvme0n1 : 10.00 23649.84 92.38 0.00 0.00 5408.66 4868.39 26588.89 00:30:52.154 [2024-12-06T14:47:58.152Z] =================================================================================================================== 00:30:52.154 [2024-12-06T14:47:58.152Z] Total : 23649.84 92.38 0.00 0.00 5408.66 4868.39 26588.89 00:30:52.154 { 00:30:52.154 "results": [ 00:30:52.154 { 00:30:52.154 "job": "Nvme0n1", 00:30:52.154 "core_mask": "0x2", 00:30:52.154 "workload": "randwrite", 00:30:52.154 "status": "finished", 00:30:52.154 "queue_depth": 128, 00:30:52.154 "io_size": 4096, 00:30:52.154 "runtime": 10.004381, 00:30:52.154 "iops": 23649.839005531678, 00:30:52.154 "mibps": 92.38218361535812, 00:30:52.154 "io_failed": 0, 00:30:52.154 "io_timeout": 0, 00:30:52.154 "avg_latency_us": 5408.664590187822, 00:30:52.154 "min_latency_us": 4868.388571428572, 00:30:52.154 "max_latency_us": 26588.891428571427 00:30:52.154 } 00:30:52.154 ], 00:30:52.154 "core_count": 1 00:30:52.154 } 00:30:52.154 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3205066 00:30:52.154 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3205066 ']' 00:30:52.154 15:47:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3205066 00:30:52.154 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:52.154 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:52.154 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3205066 00:30:52.413 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:52.413 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:52.413 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3205066' 00:30:52.413 killing process with pid 3205066 00:30:52.413 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3205066 00:30:52.413 Received shutdown signal, test time was about 10.000000 seconds 00:30:52.413 00:30:52.413 Latency(us) 00:30:52.413 [2024-12-06T14:47:58.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.413 [2024-12-06T14:47:58.411Z] =================================================================================================================== 00:30:52.413 [2024-12-06T14:47:58.411Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:52.413 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3205066 00:30:52.413 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:52.671 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:52.929 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:52.929 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3d6e5e0-47be-4b49-aa3a-823060deaaf6 00:30:52.929 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:52.929 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:52.929 15:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:53.186 [2024-12-06 15:47:59.064778] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:53.186 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3d6e5e0-47be-4b49-aa3a-823060deaaf6 00:30:53.186 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:53.186 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3d6e5e0-47be-4b49-aa3a-823060deaaf6 00:30:53.186 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.186 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.186 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.186 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.186 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.186 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.186 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:53.186 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:53.186 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3d6e5e0-47be-4b49-aa3a-823060deaaf6 00:30:53.444 request: 00:30:53.444 { 00:30:53.444 "uuid": "b3d6e5e0-47be-4b49-aa3a-823060deaaf6", 00:30:53.444 "method": 
"bdev_lvol_get_lvstores", 00:30:53.444 "req_id": 1 00:30:53.444 } 00:30:53.444 Got JSON-RPC error response 00:30:53.444 response: 00:30:53.444 { 00:30:53.444 "code": -19, 00:30:53.444 "message": "No such device" 00:30:53.444 } 00:30:53.444 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:53.444 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:53.444 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:53.444 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:53.444 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:53.703 aio_bdev 00:30:53.703 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 027ede80-c988-43e8-b5b6-6af82dfbf76f 00:30:53.703 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=027ede80-c988-43e8-b5b6-6af82dfbf76f 00:30:53.703 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:53.703 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:53.703 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:53.703 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:53.703 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:53.962 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 027ede80-c988-43e8-b5b6-6af82dfbf76f -t 2000 00:30:53.962 [ 00:30:53.962 { 00:30:53.962 "name": "027ede80-c988-43e8-b5b6-6af82dfbf76f", 00:30:53.962 "aliases": [ 00:30:53.962 "lvs/lvol" 00:30:53.962 ], 00:30:53.962 "product_name": "Logical Volume", 00:30:53.962 "block_size": 4096, 00:30:53.962 "num_blocks": 38912, 00:30:53.962 "uuid": "027ede80-c988-43e8-b5b6-6af82dfbf76f", 00:30:53.962 "assigned_rate_limits": { 00:30:53.962 "rw_ios_per_sec": 0, 00:30:53.962 "rw_mbytes_per_sec": 0, 00:30:53.962 "r_mbytes_per_sec": 0, 00:30:53.962 "w_mbytes_per_sec": 0 00:30:53.962 }, 00:30:53.962 "claimed": false, 00:30:53.962 "zoned": false, 00:30:53.962 "supported_io_types": { 00:30:53.962 "read": true, 00:30:53.962 "write": true, 00:30:53.962 "unmap": true, 00:30:53.962 "flush": false, 00:30:53.962 "reset": true, 00:30:53.962 "nvme_admin": false, 00:30:53.962 "nvme_io": false, 00:30:53.962 "nvme_io_md": false, 00:30:53.962 "write_zeroes": true, 00:30:53.962 "zcopy": false, 00:30:53.962 "get_zone_info": false, 00:30:53.962 "zone_management": false, 00:30:53.962 "zone_append": false, 00:30:53.962 "compare": false, 00:30:53.962 "compare_and_write": false, 00:30:53.962 "abort": false, 00:30:53.962 "seek_hole": true, 00:30:53.962 "seek_data": true, 00:30:53.962 "copy": false, 00:30:53.962 "nvme_iov_md": false 00:30:53.962 }, 00:30:53.962 "driver_specific": { 00:30:53.962 "lvol": { 00:30:53.962 "lvol_store_uuid": "b3d6e5e0-47be-4b49-aa3a-823060deaaf6", 00:30:53.962 "base_bdev": "aio_bdev", 00:30:53.962 
"thin_provision": false, 00:30:53.962 "num_allocated_clusters": 38, 00:30:53.962 "snapshot": false, 00:30:53.962 "clone": false, 00:30:53.962 "esnap_clone": false 00:30:53.962 } 00:30:53.962 } 00:30:53.962 } 00:30:53.962 ] 00:30:53.962 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:53.962 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3d6e5e0-47be-4b49-aa3a-823060deaaf6 00:30:53.962 15:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:54.220 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:54.220 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b3d6e5e0-47be-4b49-aa3a-823060deaaf6 00:30:54.220 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:54.479 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:54.479 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 027ede80-c988-43e8-b5b6-6af82dfbf76f 00:30:54.738 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b3d6e5e0-47be-4b49-aa3a-823060deaaf6 
00:30:54.997 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:54.997 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:54.997 00:30:54.997 real 0m15.579s 00:30:54.997 user 0m15.034s 00:30:54.997 sys 0m1.531s 00:30:54.997 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:54.997 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:54.997 ************************************ 00:30:54.997 END TEST lvs_grow_clean 00:30:54.997 ************************************ 00:30:54.997 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:54.997 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:54.997 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:54.997 15:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:55.256 ************************************ 00:30:55.256 START TEST lvs_grow_dirty 00:30:55.256 ************************************ 00:30:55.256 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:55.256 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:55.256 15:48:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:55.256 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:55.256 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:55.256 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:55.256 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:55.256 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:55.256 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:55.256 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:55.256 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:55.256 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:55.518 15:48:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 00:30:55.518 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 00:30:55.518 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:55.777 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:55.777 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:55.777 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 lvol 150 00:30:56.035 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=356006a3-18ae-44fc-b2e7-8ad2c2f92803 00:30:56.035 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:56.035 15:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:56.035 [2024-12-06 15:48:02.012675] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:56.035 [2024-12-06 
15:48:02.012790] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:56.035 true 00:30:56.035 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 00:30:56.035 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:56.294 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:56.294 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:56.552 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 356006a3-18ae-44fc-b2e7-8ad2c2f92803 00:30:56.811 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:56.811 [2024-12-06 15:48:02.777086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.811 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:57.070 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3207775 00:30:57.070 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:57.070 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:57.070 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3207775 /var/tmp/bdevperf.sock 00:30:57.070 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3207775 ']' 00:30:57.070 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:57.070 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:57.070 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:57.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:57.070 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:57.070 15:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:57.070 [2024-12-06 15:48:03.023697] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:30:57.070 [2024-12-06 15:48:03.023745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207775 ] 00:30:57.329 [2024-12-06 15:48:03.100376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.329 [2024-12-06 15:48:03.142670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.329 15:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:57.329 15:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:57.329 15:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:57.588 Nvme0n1 00:30:57.588 15:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:57.847 [ 00:30:57.847 { 00:30:57.847 "name": "Nvme0n1", 00:30:57.847 "aliases": [ 00:30:57.847 "356006a3-18ae-44fc-b2e7-8ad2c2f92803" 00:30:57.847 ], 00:30:57.847 "product_name": "NVMe disk", 00:30:57.847 "block_size": 4096, 00:30:57.847 "num_blocks": 38912, 00:30:57.847 "uuid": "356006a3-18ae-44fc-b2e7-8ad2c2f92803", 00:30:57.847 "numa_id": 1, 00:30:57.847 "assigned_rate_limits": { 00:30:57.847 "rw_ios_per_sec": 0, 00:30:57.847 "rw_mbytes_per_sec": 0, 00:30:57.847 "r_mbytes_per_sec": 0, 00:30:57.847 "w_mbytes_per_sec": 0 00:30:57.847 }, 00:30:57.847 "claimed": false, 00:30:57.847 "zoned": false, 
00:30:57.847 "supported_io_types": { 00:30:57.847 "read": true, 00:30:57.847 "write": true, 00:30:57.847 "unmap": true, 00:30:57.847 "flush": true, 00:30:57.847 "reset": true, 00:30:57.847 "nvme_admin": true, 00:30:57.847 "nvme_io": true, 00:30:57.847 "nvme_io_md": false, 00:30:57.847 "write_zeroes": true, 00:30:57.847 "zcopy": false, 00:30:57.847 "get_zone_info": false, 00:30:57.847 "zone_management": false, 00:30:57.847 "zone_append": false, 00:30:57.847 "compare": true, 00:30:57.847 "compare_and_write": true, 00:30:57.847 "abort": true, 00:30:57.847 "seek_hole": false, 00:30:57.847 "seek_data": false, 00:30:57.847 "copy": true, 00:30:57.847 "nvme_iov_md": false 00:30:57.847 }, 00:30:57.847 "memory_domains": [ 00:30:57.847 { 00:30:57.847 "dma_device_id": "system", 00:30:57.847 "dma_device_type": 1 00:30:57.847 } 00:30:57.847 ], 00:30:57.847 "driver_specific": { 00:30:57.847 "nvme": [ 00:30:57.847 { 00:30:57.847 "trid": { 00:30:57.847 "trtype": "TCP", 00:30:57.847 "adrfam": "IPv4", 00:30:57.847 "traddr": "10.0.0.2", 00:30:57.847 "trsvcid": "4420", 00:30:57.847 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:57.847 }, 00:30:57.847 "ctrlr_data": { 00:30:57.847 "cntlid": 1, 00:30:57.847 "vendor_id": "0x8086", 00:30:57.847 "model_number": "SPDK bdev Controller", 00:30:57.847 "serial_number": "SPDK0", 00:30:57.847 "firmware_revision": "25.01", 00:30:57.847 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:57.847 "oacs": { 00:30:57.847 "security": 0, 00:30:57.847 "format": 0, 00:30:57.847 "firmware": 0, 00:30:57.847 "ns_manage": 0 00:30:57.847 }, 00:30:57.847 "multi_ctrlr": true, 00:30:57.847 "ana_reporting": false 00:30:57.847 }, 00:30:57.847 "vs": { 00:30:57.847 "nvme_version": "1.3" 00:30:57.847 }, 00:30:57.847 "ns_data": { 00:30:57.847 "id": 1, 00:30:57.847 "can_share": true 00:30:57.847 } 00:30:57.847 } 00:30:57.847 ], 00:30:57.847 "mp_policy": "active_passive" 00:30:57.847 } 00:30:57.847 } 00:30:57.847 ] 00:30:57.847 15:48:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:57.847 15:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3207959 00:30:57.847 15:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:57.847 Running I/O for 10 seconds... 00:30:59.235 Latency(us) 00:30:59.235 [2024-12-06T14:48:05.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:59.235 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:30:59.235 [2024-12-06T14:48:05.233Z] =================================================================================================================== 00:30:59.235 [2024-12-06T14:48:05.233Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:30:59.235 00:30:59.803 15:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 00:30:59.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:59.803 Nvme0n1 : 2.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:30:59.803 [2024-12-06T14:48:05.801Z] =================================================================================================================== 00:30:59.803 [2024-12-06T14:48:05.801Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:30:59.803 00:31:00.061 true 00:31:00.061 15:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 00:31:00.061 15:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:00.341 15:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:00.341 15:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:00.341 15:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3207959 00:31:01.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:01.021 Nvme0n1 : 3.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:31:01.021 [2024-12-06T14:48:07.019Z] =================================================================================================================== 00:31:01.021 [2024-12-06T14:48:07.019Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:31:01.021 00:31:01.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:01.955 Nvme0n1 : 4.00 23145.75 90.41 0.00 0.00 0.00 0.00 0.00 00:31:01.955 [2024-12-06T14:48:07.953Z] =================================================================================================================== 00:31:01.955 [2024-12-06T14:48:07.953Z] Total : 23145.75 90.41 0.00 0.00 0.00 0.00 0.00 00:31:01.955 00:31:02.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:02.891 Nvme0n1 : 5.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:31:02.891 [2024-12-06T14:48:08.889Z] =================================================================================================================== 00:31:02.891 [2024-12-06T14:48:08.889Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:31:02.891 00:31:03.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:31:03.827 Nvme0n1 : 6.00 23159.17 90.47 0.00 0.00 0.00 0.00 0.00 00:31:03.827 [2024-12-06T14:48:09.825Z] =================================================================================================================== 00:31:03.828 [2024-12-06T14:48:09.826Z] Total : 23159.17 90.47 0.00 0.00 0.00 0.00 0.00 00:31:03.828 00:31:05.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:05.205 Nvme0n1 : 7.00 23207.14 90.65 0.00 0.00 0.00 0.00 0.00 00:31:05.205 [2024-12-06T14:48:11.203Z] =================================================================================================================== 00:31:05.205 [2024-12-06T14:48:11.203Z] Total : 23207.14 90.65 0.00 0.00 0.00 0.00 0.00 00:31:05.205 00:31:06.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:06.142 Nvme0n1 : 8.00 23274.88 90.92 0.00 0.00 0.00 0.00 0.00 00:31:06.142 [2024-12-06T14:48:12.140Z] =================================================================================================================== 00:31:06.142 [2024-12-06T14:48:12.140Z] Total : 23274.88 90.92 0.00 0.00 0.00 0.00 0.00 00:31:06.142 00:31:07.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:07.079 Nvme0n1 : 9.00 23327.56 91.12 0.00 0.00 0.00 0.00 0.00 00:31:07.079 [2024-12-06T14:48:13.077Z] =================================================================================================================== 00:31:07.079 [2024-12-06T14:48:13.077Z] Total : 23327.56 91.12 0.00 0.00 0.00 0.00 0.00 00:31:07.079 00:31:08.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.024 Nvme0n1 : 10.00 23382.40 91.34 0.00 0.00 0.00 0.00 0.00 00:31:08.024 [2024-12-06T14:48:14.022Z] =================================================================================================================== 00:31:08.024 [2024-12-06T14:48:14.022Z] Total : 23382.40 91.34 0.00 0.00 0.00 0.00 0.00 00:31:08.024 00:31:08.024 
00:31:08.024 Latency(us) 00:31:08.024 [2024-12-06T14:48:14.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.024 Nvme0n1 : 10.01 23383.49 91.34 0.00 0.00 5470.99 3229.99 28336.52 00:31:08.024 [2024-12-06T14:48:14.022Z] =================================================================================================================== 00:31:08.024 [2024-12-06T14:48:14.022Z] Total : 23383.49 91.34 0.00 0.00 5470.99 3229.99 28336.52 00:31:08.024 { 00:31:08.024 "results": [ 00:31:08.024 { 00:31:08.024 "job": "Nvme0n1", 00:31:08.024 "core_mask": "0x2", 00:31:08.024 "workload": "randwrite", 00:31:08.024 "status": "finished", 00:31:08.024 "queue_depth": 128, 00:31:08.024 "io_size": 4096, 00:31:08.024 "runtime": 10.005006, 00:31:08.024 "iops": 23383.49422279207, 00:31:08.024 "mibps": 91.34177430778152, 00:31:08.024 "io_failed": 0, 00:31:08.024 "io_timeout": 0, 00:31:08.024 "avg_latency_us": 5470.990271410986, 00:31:08.024 "min_latency_us": 3229.9885714285715, 00:31:08.024 "max_latency_us": 28336.518095238094 00:31:08.024 } 00:31:08.024 ], 00:31:08.024 "core_count": 1 00:31:08.024 } 00:31:08.024 15:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3207775 00:31:08.024 15:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3207775 ']' 00:31:08.025 15:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3207775 00:31:08.025 15:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:08.025 15:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:08.025 15:48:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3207775 00:31:08.025 15:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:08.025 15:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:08.025 15:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3207775' 00:31:08.025 killing process with pid 3207775 00:31:08.025 15:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3207775 00:31:08.025 Received shutdown signal, test time was about 10.000000 seconds 00:31:08.025 00:31:08.025 Latency(us) 00:31:08.025 [2024-12-06T14:48:14.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.025 [2024-12-06T14:48:14.023Z] =================================================================================================================== 00:31:08.025 [2024-12-06T14:48:14.023Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:08.025 15:48:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3207775 00:31:08.283 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:08.283 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:08.542 15:48:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 00:31:08.542 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3204671 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3204671 00:31:08.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3204671 Killed "${NVMF_APP[@]}" "$@" 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3210008 00:31:08.801 15:48:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3210008 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3210008 ']' 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:08.801 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:08.801 [2024-12-06 15:48:14.720798] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:08.801 [2024-12-06 15:48:14.721736] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:31:08.801 [2024-12-06 15:48:14.721775] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.061 [2024-12-06 15:48:14.802303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.061 [2024-12-06 15:48:14.842476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.061 [2024-12-06 15:48:14.842514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:09.061 [2024-12-06 15:48:14.842521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.061 [2024-12-06 15:48:14.842527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.061 [2024-12-06 15:48:14.842532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:09.061 [2024-12-06 15:48:14.843080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.061 [2024-12-06 15:48:14.911146] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:09.061 [2024-12-06 15:48:14.911364] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:09.061 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:09.061 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:09.061 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:09.061 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:09.061 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:09.061 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.061 15:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:09.320 [2024-12-06 15:48:15.152530] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:09.320 [2024-12-06 15:48:15.152742] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:09.320 [2024-12-06 15:48:15.152826] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:09.320 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:09.320 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 356006a3-18ae-44fc-b2e7-8ad2c2f92803 00:31:09.320 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=356006a3-18ae-44fc-b2e7-8ad2c2f92803 00:31:09.320 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:09.320 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:09.320 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:09.320 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:09.320 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:09.579 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 356006a3-18ae-44fc-b2e7-8ad2c2f92803 -t 2000 00:31:09.579 [ 00:31:09.579 { 00:31:09.579 "name": "356006a3-18ae-44fc-b2e7-8ad2c2f92803", 00:31:09.579 "aliases": [ 00:31:09.579 "lvs/lvol" 00:31:09.579 ], 00:31:09.579 "product_name": "Logical Volume", 00:31:09.579 "block_size": 4096, 00:31:09.579 "num_blocks": 38912, 00:31:09.579 "uuid": "356006a3-18ae-44fc-b2e7-8ad2c2f92803", 00:31:09.579 "assigned_rate_limits": { 00:31:09.579 "rw_ios_per_sec": 0, 00:31:09.579 "rw_mbytes_per_sec": 0, 00:31:09.579 "r_mbytes_per_sec": 0, 00:31:09.579 "w_mbytes_per_sec": 0 00:31:09.579 }, 00:31:09.579 "claimed": false, 00:31:09.579 "zoned": false, 00:31:09.579 "supported_io_types": { 00:31:09.579 "read": true, 00:31:09.579 "write": true, 00:31:09.579 "unmap": true, 00:31:09.579 "flush": false, 00:31:09.579 "reset": true, 00:31:09.579 "nvme_admin": false, 00:31:09.579 "nvme_io": false, 00:31:09.579 "nvme_io_md": false, 00:31:09.579 "write_zeroes": true, 
00:31:09.579 "zcopy": false, 00:31:09.579 "get_zone_info": false, 00:31:09.579 "zone_management": false, 00:31:09.579 "zone_append": false, 00:31:09.579 "compare": false, 00:31:09.579 "compare_and_write": false, 00:31:09.579 "abort": false, 00:31:09.579 "seek_hole": true, 00:31:09.579 "seek_data": true, 00:31:09.579 "copy": false, 00:31:09.579 "nvme_iov_md": false 00:31:09.579 }, 00:31:09.579 "driver_specific": { 00:31:09.579 "lvol": { 00:31:09.579 "lvol_store_uuid": "e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7", 00:31:09.579 "base_bdev": "aio_bdev", 00:31:09.579 "thin_provision": false, 00:31:09.579 "num_allocated_clusters": 38, 00:31:09.579 "snapshot": false, 00:31:09.579 "clone": false, 00:31:09.579 "esnap_clone": false 00:31:09.579 } 00:31:09.579 } 00:31:09.579 } 00:31:09.579 ] 00:31:09.579 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:09.579 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 00:31:09.579 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:09.837 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:09.837 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 00:31:09.837 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:10.095 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:10.095 15:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:10.354 [2024-12-06 15:48:16.103532] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 00:31:10.354 request: 00:31:10.354 { 00:31:10.354 "uuid": "e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7", 00:31:10.354 "method": "bdev_lvol_get_lvstores", 00:31:10.354 "req_id": 1 00:31:10.354 } 00:31:10.354 Got JSON-RPC error response 00:31:10.354 response: 00:31:10.354 { 00:31:10.354 "code": -19, 00:31:10.354 "message": "No such device" 00:31:10.354 } 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:10.354 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:10.613 aio_bdev 00:31:10.613 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 356006a3-18ae-44fc-b2e7-8ad2c2f92803 00:31:10.613 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=356006a3-18ae-44fc-b2e7-8ad2c2f92803 00:31:10.613 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:10.614 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:10.614 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:10.614 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:10.614 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:10.873 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 356006a3-18ae-44fc-b2e7-8ad2c2f92803 -t 2000 00:31:11.132 [ 00:31:11.132 { 00:31:11.132 "name": "356006a3-18ae-44fc-b2e7-8ad2c2f92803", 00:31:11.132 "aliases": [ 00:31:11.132 "lvs/lvol" 00:31:11.132 ], 00:31:11.132 "product_name": "Logical Volume", 00:31:11.132 "block_size": 4096, 00:31:11.132 "num_blocks": 38912, 00:31:11.132 "uuid": "356006a3-18ae-44fc-b2e7-8ad2c2f92803", 00:31:11.132 "assigned_rate_limits": { 00:31:11.132 "rw_ios_per_sec": 0, 00:31:11.132 "rw_mbytes_per_sec": 0, 00:31:11.132 
"r_mbytes_per_sec": 0, 00:31:11.132 "w_mbytes_per_sec": 0 00:31:11.132 }, 00:31:11.132 "claimed": false, 00:31:11.132 "zoned": false, 00:31:11.132 "supported_io_types": { 00:31:11.132 "read": true, 00:31:11.132 "write": true, 00:31:11.132 "unmap": true, 00:31:11.132 "flush": false, 00:31:11.132 "reset": true, 00:31:11.132 "nvme_admin": false, 00:31:11.132 "nvme_io": false, 00:31:11.132 "nvme_io_md": false, 00:31:11.132 "write_zeroes": true, 00:31:11.132 "zcopy": false, 00:31:11.132 "get_zone_info": false, 00:31:11.132 "zone_management": false, 00:31:11.132 "zone_append": false, 00:31:11.132 "compare": false, 00:31:11.132 "compare_and_write": false, 00:31:11.132 "abort": false, 00:31:11.132 "seek_hole": true, 00:31:11.132 "seek_data": true, 00:31:11.132 "copy": false, 00:31:11.132 "nvme_iov_md": false 00:31:11.132 }, 00:31:11.132 "driver_specific": { 00:31:11.132 "lvol": { 00:31:11.132 "lvol_store_uuid": "e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7", 00:31:11.132 "base_bdev": "aio_bdev", 00:31:11.132 "thin_provision": false, 00:31:11.132 "num_allocated_clusters": 38, 00:31:11.132 "snapshot": false, 00:31:11.132 "clone": false, 00:31:11.132 "esnap_clone": false 00:31:11.132 } 00:31:11.132 } 00:31:11.132 } 00:31:11.132 ] 00:31:11.132 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:11.132 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 00:31:11.132 15:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:11.132 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:11.132 15:48:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:11.132 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 00:31:11.391 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:11.391 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 356006a3-18ae-44fc-b2e7-8ad2c2f92803 00:31:11.650 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e49f4451-0d2e-46f8-8ac0-f1cbff7eebe7 00:31:11.909 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:12.168 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:12.168 00:31:12.168 real 0m16.928s 00:31:12.168 user 0m34.351s 00:31:12.168 sys 0m3.838s 00:31:12.168 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:12.168 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:12.168 ************************************ 00:31:12.168 END TEST lvs_grow_dirty 00:31:12.168 ************************************ 
00:31:12.168 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:12.168 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:31:12.168 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:31:12.168 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:31:12.168 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:12.168 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:31:12.168 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:31:12.169 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:31:12.169 15:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:12.169 nvmf_trace.0 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:12.169 15:48:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:12.169 rmmod nvme_tcp 00:31:12.169 rmmod nvme_fabrics 00:31:12.169 rmmod nvme_keyring 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3210008 ']' 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3210008 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3210008 ']' 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3210008 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3210008 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:12.169 
15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3210008' 00:31:12.169 killing process with pid 3210008 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3210008 00:31:12.169 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3210008 00:31:12.428 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:12.428 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:12.428 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:12.428 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:12.428 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:12.428 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:12.428 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:12.428 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:12.428 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:12.428 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.428 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.428 15:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.984 
15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.984 00:31:14.984 real 0m41.706s 00:31:14.984 user 0m51.808s 00:31:14.984 sys 0m10.349s 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:14.984 ************************************ 00:31:14.984 END TEST nvmf_lvs_grow 00:31:14.984 ************************************ 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:14.984 ************************************ 00:31:14.984 START TEST nvmf_bdev_io_wait 00:31:14.984 ************************************ 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:14.984 * Looking for test storage... 
00:31:14.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:14.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.984 --rc genhtml_branch_coverage=1 00:31:14.984 --rc genhtml_function_coverage=1 00:31:14.984 --rc genhtml_legend=1 00:31:14.984 --rc geninfo_all_blocks=1 00:31:14.984 --rc geninfo_unexecuted_blocks=1 00:31:14.984 00:31:14.984 ' 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:14.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.984 --rc genhtml_branch_coverage=1 00:31:14.984 --rc genhtml_function_coverage=1 00:31:14.984 --rc genhtml_legend=1 00:31:14.984 --rc geninfo_all_blocks=1 00:31:14.984 --rc geninfo_unexecuted_blocks=1 00:31:14.984 00:31:14.984 ' 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:14.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.984 --rc genhtml_branch_coverage=1 00:31:14.984 --rc genhtml_function_coverage=1 00:31:14.984 --rc genhtml_legend=1 00:31:14.984 --rc geninfo_all_blocks=1 00:31:14.984 --rc geninfo_unexecuted_blocks=1 00:31:14.984 00:31:14.984 ' 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:14.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.984 --rc genhtml_branch_coverage=1 00:31:14.984 --rc genhtml_function_coverage=1 
00:31:14.984 --rc genhtml_legend=1 00:31:14.984 --rc geninfo_all_blocks=1 00:31:14.984 --rc geninfo_unexecuted_blocks=1 00:31:14.984 00:31:14.984 ' 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:14.984 15:48:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.984 15:48:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.984 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:14.985 15:48:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:14.985 15:48:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:14.985 15:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:21.553 15:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:21.553 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.553 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:21.554 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:21.554 Found net devices under 0000:86:00.0: cvl_0_0 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:21.554 Found net devices under 0000:86:00.1: cvl_0_1 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:21.554 15:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:21.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:21.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:31:21.554 00:31:21.554 --- 10.0.0.2 ping statistics --- 00:31:21.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.554 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:21.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:31:21.554 00:31:21.554 --- 10.0.0.1 ping statistics --- 00:31:21.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.554 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:21.554 15:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3214102 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3214102 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3214102 ']' 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:21.554 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:21.554 [2024-12-06 15:48:26.665041] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:21.554 [2024-12-06 15:48:26.666051] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:31:21.554 [2024-12-06 15:48:26.666091] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.554 [2024-12-06 15:48:26.746198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:21.554 [2024-12-06 15:48:26.789018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.554 [2024-12-06 15:48:26.789054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.554 [2024-12-06 15:48:26.789061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.554 [2024-12-06 15:48:26.789067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.555 [2024-12-06 15:48:26.789073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
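[Editor's note: the nvmf_tcp_init steps traced earlier (nvmf/common.sh@250-291) can be summarized with the dry-run sketch below. The `run` echo wrapper is an illustration aid, not part of the harness; the real commands need root and the `cvl_0_0`/`cvl_0_1` interfaces present on the CI node, so nothing here is executed for real.]

```shell
#!/usr/bin/env bash
# Dry-run sketch of nvmf_tcp_init as seen in the trace above: move the
# target NIC into a private network namespace, address both ends, open the
# NVMe/TCP port, and verify reachability with ping. "run" only echoes.
CMDS=""
run() { CMDS+="+ $*"$'\n'; printf '+ %s\n' "$*"; }

TARGET_IF=cvl_0_0       # moved into the namespace; gets the target IP
INITIATOR_IF=cvl_0_1    # stays in the default namespace
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port, tagged with an SPDK_NVMF comment so the
# teardown path can strip the rule again later:
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Everything the target process does afterwards runs under `ip netns exec cvl_0_0_ns_spdk`, which is why NVMF_TARGET_NS_CMD is prepended to NVMF_APP in the trace.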
00:31:21.555 [2024-12-06 15:48:26.790664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.555 [2024-12-06 15:48:26.790775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:21.555 [2024-12-06 15:48:26.790904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.555 [2024-12-06 15:48:26.790904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:21.555 [2024-12-06 15:48:26.791237] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.555 15:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:21.555 [2024-12-06 15:48:26.914854] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:21.555 [2024-12-06 15:48:26.915105] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:21.555 [2024-12-06 15:48:26.915568] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:21.555 [2024-12-06 15:48:26.915592] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:21.555 [2024-12-06 15:48:26.927374] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:21.555 Malloc0 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.555 15:48:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.555 15:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:21.555 [2024-12-06 15:48:26.999891] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3214295 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3214297 00:31:21.555 15:48:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:21.555 { 00:31:21.555 "params": { 00:31:21.555 "name": "Nvme$subsystem", 00:31:21.555 "trtype": "$TEST_TRANSPORT", 00:31:21.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.555 "adrfam": "ipv4", 00:31:21.555 "trsvcid": "$NVMF_PORT", 00:31:21.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.555 "hdgst": ${hdgst:-false}, 00:31:21.555 "ddgst": ${ddgst:-false} 00:31:21.555 }, 00:31:21.555 "method": "bdev_nvme_attach_controller" 00:31:21.555 } 00:31:21.555 EOF 00:31:21.555 )") 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3214299 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:21.555 15:48:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:21.555 { 00:31:21.555 "params": { 00:31:21.555 "name": "Nvme$subsystem", 00:31:21.555 "trtype": "$TEST_TRANSPORT", 00:31:21.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.555 "adrfam": "ipv4", 00:31:21.555 "trsvcid": "$NVMF_PORT", 00:31:21.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.555 "hdgst": ${hdgst:-false}, 00:31:21.555 "ddgst": ${ddgst:-false} 00:31:21.555 }, 00:31:21.555 "method": "bdev_nvme_attach_controller" 00:31:21.555 } 00:31:21.555 EOF 00:31:21.555 )") 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3214302 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:21.555 { 00:31:21.555 "params": { 00:31:21.555 "name": 
"Nvme$subsystem", 00:31:21.555 "trtype": "$TEST_TRANSPORT", 00:31:21.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.555 "adrfam": "ipv4", 00:31:21.555 "trsvcid": "$NVMF_PORT", 00:31:21.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.555 "hdgst": ${hdgst:-false}, 00:31:21.555 "ddgst": ${ddgst:-false} 00:31:21.555 }, 00:31:21.555 "method": "bdev_nvme_attach_controller" 00:31:21.555 } 00:31:21.555 EOF 00:31:21.555 )") 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:21.555 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:21.556 { 00:31:21.556 "params": { 00:31:21.556 "name": "Nvme$subsystem", 00:31:21.556 "trtype": "$TEST_TRANSPORT", 00:31:21.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.556 "adrfam": "ipv4", 00:31:21.556 "trsvcid": "$NVMF_PORT", 00:31:21.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.556 "hdgst": ${hdgst:-false}, 00:31:21.556 "ddgst": ${ddgst:-false} 00:31:21.556 }, 00:31:21.556 "method": 
"bdev_nvme_attach_controller" 00:31:21.556 } 00:31:21.556 EOF 00:31:21.556 )") 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3214295 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:21.556 "params": { 00:31:21.556 "name": "Nvme1", 00:31:21.556 "trtype": "tcp", 00:31:21.556 "traddr": "10.0.0.2", 00:31:21.556 "adrfam": "ipv4", 00:31:21.556 "trsvcid": "4420", 00:31:21.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:21.556 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:21.556 "hdgst": false, 00:31:21.556 "ddgst": false 00:31:21.556 }, 00:31:21.556 "method": "bdev_nvme_attach_controller" 00:31:21.556 }' 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:21.556 "params": { 00:31:21.556 "name": "Nvme1", 00:31:21.556 "trtype": "tcp", 00:31:21.556 "traddr": "10.0.0.2", 00:31:21.556 "adrfam": "ipv4", 00:31:21.556 "trsvcid": "4420", 00:31:21.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:21.556 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:21.556 "hdgst": false, 00:31:21.556 "ddgst": false 00:31:21.556 }, 00:31:21.556 "method": "bdev_nvme_attach_controller" 00:31:21.556 }' 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:21.556 "params": { 00:31:21.556 "name": "Nvme1", 00:31:21.556 "trtype": "tcp", 00:31:21.556 "traddr": "10.0.0.2", 00:31:21.556 "adrfam": "ipv4", 00:31:21.556 "trsvcid": "4420", 00:31:21.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:21.556 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:21.556 "hdgst": false, 00:31:21.556 "ddgst": false 00:31:21.556 }, 00:31:21.556 "method": "bdev_nvme_attach_controller" 00:31:21.556 }' 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:21.556 15:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:21.556 "params": { 00:31:21.556 "name": "Nvme1", 00:31:21.556 "trtype": "tcp", 00:31:21.556 "traddr": "10.0.0.2", 00:31:21.556 "adrfam": "ipv4", 00:31:21.556 "trsvcid": "4420", 00:31:21.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:21.556 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:21.556 "hdgst": false, 00:31:21.556 "ddgst": false 00:31:21.556 }, 00:31:21.556 "method": "bdev_nvme_attach_controller" 00:31:21.556 }' 00:31:21.556 [2024-12-06 15:48:27.049266] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 
initialization... 00:31:21.556 [2024-12-06 15:48:27.049318] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:21.556 [2024-12-06 15:48:27.052650] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:31:21.556 [2024-12-06 15:48:27.052650] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:31:21.556 [2024-12-06 15:48:27.052697] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:21.556 [2024-12-06 15:48:27.052698] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:21.556 [2024-12-06 15:48:27.055530] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:31:21.556 [2024-12-06 15:48:27.055575] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:21.556 [2024-12-06 15:48:27.233313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.556 [2024-12-06 15:48:27.275687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:21.556 [2024-12-06 15:48:27.330870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.556 [2024-12-06 15:48:27.372769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:21.556 [2024-12-06 15:48:27.388780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.556 [2024-12-06 15:48:27.424665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:21.556 [2024-12-06 15:48:27.481527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.556 [2024-12-06 15:48:27.534903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:21.815 Running I/O for 1 seconds... 00:31:21.815 Running I/O for 1 seconds... 00:31:21.815 Running I/O for 1 seconds... 00:31:21.815 Running I/O for 1 seconds... 
00:31:22.752 12195.00 IOPS, 47.64 MiB/s 00:31:22.752 Latency(us) 00:31:22.752 [2024-12-06T14:48:28.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.752 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:31:22.752 Nvme1n1 : 1.01 12257.04 47.88 0.00 0.00 10408.87 3620.08 12732.71 00:31:22.752 [2024-12-06T14:48:28.750Z] =================================================================================================================== 00:31:22.752 [2024-12-06T14:48:28.750Z] Total : 12257.04 47.88 0.00 0.00 10408.87 3620.08 12732.71 00:31:22.752 10413.00 IOPS, 40.68 MiB/s 00:31:22.752 Latency(us) 00:31:22.752 [2024-12-06T14:48:28.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.752 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:22.752 Nvme1n1 : 1.01 10468.24 40.89 0.00 0.00 12181.58 4213.03 14917.24 00:31:22.752 [2024-12-06T14:48:28.750Z] =================================================================================================================== 00:31:22.752 [2024-12-06T14:48:28.750Z] Total : 10468.24 40.89 0.00 0.00 12181.58 4213.03 14917.24 00:31:22.752 242640.00 IOPS, 947.81 MiB/s 00:31:22.752 Latency(us) 00:31:22.752 [2024-12-06T14:48:28.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.752 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:22.752 Nvme1n1 : 1.00 242270.78 946.37 0.00 0.00 525.82 220.40 1521.37 00:31:22.752 [2024-12-06T14:48:28.750Z] =================================================================================================================== 00:31:22.752 [2024-12-06T14:48:28.750Z] Total : 242270.78 946.37 0.00 0.00 525.82 220.40 1521.37 00:31:22.752 11513.00 IOPS, 44.97 MiB/s 00:31:22.752 Latency(us) 00:31:22.752 [2024-12-06T14:48:28.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:22.752 Job: Nvme1n1 (Core 
Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:31:22.752 Nvme1n1 : 1.01 11621.11 45.39 0.00 0.00 10989.94 2090.91 16352.79 00:31:22.752 [2024-12-06T14:48:28.750Z] =================================================================================================================== 00:31:22.752 [2024-12-06T14:48:28.750Z] Total : 11621.11 45.39 0.00 0.00 10989.94 2090.91 16352.79 00:31:22.752 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3214297 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3214299 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3214302 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:23.012 15:48:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:23.012 rmmod nvme_tcp 00:31:23.012 rmmod nvme_fabrics 00:31:23.012 rmmod nvme_keyring 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3214102 ']' 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3214102 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3214102 ']' 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3214102 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3214102 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3214102' 00:31:23.012 killing process with pid 3214102 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3214102 00:31:23.012 15:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3214102 00:31:23.271 15:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:23.271 15:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:23.271 15:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:23.271 15:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:23.271 15:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:31:23.271 15:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:23.271 15:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:23.271 15:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:23.271 15:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:23.271 15:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.271 15:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.271 
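[Editor's note: the `iptr` teardown traced above (nvmf/common.sh@791) removes exactly the firewall rules that setup tagged with the SPDK_NVMF comment, by filtering the saved ruleset and restoring the remainder. The sketch below demonstrates the filtering step on a canned iptables-save dump, since the real `iptables-save | iptables-restore` pair needs root.]

```shell
#!/usr/bin/env bash
# The SPDK rule carries an "SPDK_NVMF:" comment; grep -v drops it while
# leaving every other rule intact. Real sequence:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
saved_rules='*filter
:INPUT ACCEPT [0:0]
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
-A INPUT -i lo -j ACCEPT
COMMIT'

filtered=$(grep -v SPDK_NVMF <<<"$saved_rules")
echo "$filtered"
```

Tagging rules at insertion time is what makes this cleanup safe: no bookkeeping of rule numbers is needed, and rules added by anything other than the test harness survive the restore.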
15:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:25.809 00:31:25.809 real 0m10.747s 00:31:25.809 user 0m14.687s 00:31:25.809 sys 0m6.632s 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:25.809 ************************************ 00:31:25.809 END TEST nvmf_bdev_io_wait 00:31:25.809 ************************************ 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:25.809 ************************************ 00:31:25.809 START TEST nvmf_queue_depth 00:31:25.809 ************************************ 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:25.809 * Looking for test storage... 
00:31:25.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:25.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.809 --rc genhtml_branch_coverage=1 00:31:25.809 --rc genhtml_function_coverage=1 00:31:25.809 --rc genhtml_legend=1 00:31:25.809 --rc geninfo_all_blocks=1 00:31:25.809 --rc geninfo_unexecuted_blocks=1 00:31:25.809 00:31:25.809 ' 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:25.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.809 --rc genhtml_branch_coverage=1 00:31:25.809 --rc genhtml_function_coverage=1 00:31:25.809 --rc genhtml_legend=1 00:31:25.809 --rc geninfo_all_blocks=1 00:31:25.809 --rc geninfo_unexecuted_blocks=1 00:31:25.809 00:31:25.809 ' 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:25.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.809 --rc genhtml_branch_coverage=1 00:31:25.809 --rc genhtml_function_coverage=1 00:31:25.809 --rc genhtml_legend=1 00:31:25.809 --rc geninfo_all_blocks=1 00:31:25.809 --rc geninfo_unexecuted_blocks=1 00:31:25.809 00:31:25.809 ' 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:25.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.809 --rc genhtml_branch_coverage=1 00:31:25.809 --rc genhtml_function_coverage=1 00:31:25.809 --rc genhtml_legend=1 00:31:25.809 --rc 
geninfo_all_blocks=1 00:31:25.809 --rc geninfo_unexecuted_blocks=1 00:31:25.809 00:31:25.809 ' 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.809 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.810 15:48:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:25.810 15:48:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:25.810 15:48:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:25.810 15:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:32.382 
15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:32.382 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:32.382 15:48:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:32.382 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:32.382 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:32.383 Found net devices under 0000:86:00.0: cvl_0_0 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:32.383 Found net devices under 0000:86:00.1: cvl_0_1 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:32.383 15:48:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:32.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:32.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:31:32.383 00:31:32.383 --- 10.0.0.2 ping statistics --- 00:31:32.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.383 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:32.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:31:32.383 00:31:32.383 --- 10.0.0.1 ping statistics --- 00:31:32.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.383 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:32.383 15:48:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3218071 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3218071 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3218071 ']' 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:32.383 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.383 [2024-12-06 15:48:37.463645] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:32.383 [2024-12-06 15:48:37.464641] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:31:32.383 [2024-12-06 15:48:37.464682] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.383 [2024-12-06 15:48:37.545332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.383 [2024-12-06 15:48:37.586767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:32.383 [2024-12-06 15:48:37.586800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:32.383 [2024-12-06 15:48:37.586808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:32.383 [2024-12-06 15:48:37.586814] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:32.383 [2024-12-06 15:48:37.586819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:32.383 [2024-12-06 15:48:37.587331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.383 [2024-12-06 15:48:37.655644] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:32.383 [2024-12-06 15:48:37.655857] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.384 [2024-12-06 15:48:37.719967] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.384 Malloc0 00:31:32.384 15:48:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.384 [2024-12-06 15:48:37.804026] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.384 
15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3218100 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3218100 /var/tmp/bdevperf.sock 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3218100 ']' 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:32.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:32.384 15:48:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.384 [2024-12-06 15:48:37.854838] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:31:32.384 [2024-12-06 15:48:37.854878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3218100 ] 00:31:32.384 [2024-12-06 15:48:37.928207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.384 [2024-12-06 15:48:37.970236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.384 15:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:32.384 15:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:32.384 15:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:32.384 15:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.384 15:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:32.384 NVMe0n1 00:31:32.384 15:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.384 15:48:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:32.384 Running I/O for 10 seconds... 
00:31:34.705 12176.00 IOPS, 47.56 MiB/s [2024-12-06T14:48:41.639Z] 12288.00 IOPS, 48.00 MiB/s [2024-12-06T14:48:42.576Z] 12297.67 IOPS, 48.04 MiB/s [2024-12-06T14:48:43.511Z] 12383.50 IOPS, 48.37 MiB/s [2024-12-06T14:48:44.446Z] 12477.80 IOPS, 48.74 MiB/s [2024-12-06T14:48:45.381Z] 12465.17 IOPS, 48.69 MiB/s [2024-12-06T14:48:46.755Z] 12532.43 IOPS, 48.95 MiB/s [2024-12-06T14:48:47.693Z] 12539.25 IOPS, 48.98 MiB/s [2024-12-06T14:48:48.629Z] 12522.56 IOPS, 48.92 MiB/s [2024-12-06T14:48:48.629Z] 12564.70 IOPS, 49.08 MiB/s 00:31:42.631 Latency(us) 00:31:42.631 [2024-12-06T14:48:48.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.631 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:42.631 Verification LBA range: start 0x0 length 0x4000 00:31:42.631 NVMe0n1 : 10.07 12584.38 49.16 0.00 0.00 81079.21 19099.06 52678.46 00:31:42.631 [2024-12-06T14:48:48.630Z] =================================================================================================================== 00:31:42.632 [2024-12-06T14:48:48.630Z] Total : 12584.38 49.16 0.00 0.00 81079.21 19099.06 52678.46 00:31:42.632 { 00:31:42.632 "results": [ 00:31:42.632 { 00:31:42.632 "job": "NVMe0n1", 00:31:42.632 "core_mask": "0x1", 00:31:42.632 "workload": "verify", 00:31:42.632 "status": "finished", 00:31:42.632 "verify_range": { 00:31:42.632 "start": 0, 00:31:42.632 "length": 16384 00:31:42.632 }, 00:31:42.632 "queue_depth": 1024, 00:31:42.632 "io_size": 4096, 00:31:42.632 "runtime": 10.065735, 00:31:42.632 "iops": 12584.376600417158, 00:31:42.632 "mibps": 49.15772109537952, 00:31:42.632 "io_failed": 0, 00:31:42.632 "io_timeout": 0, 00:31:42.632 "avg_latency_us": 81079.2134634191, 00:31:42.632 "min_latency_us": 19099.062857142857, 00:31:42.632 "max_latency_us": 52678.460952380956 00:31:42.632 } 00:31:42.632 ], 00:31:42.632 "core_count": 1 00:31:42.632 } 00:31:42.632 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3218100 00:31:42.632 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3218100 ']' 00:31:42.632 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3218100 00:31:42.632 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:42.632 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:42.632 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3218100 00:31:42.632 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:42.632 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:42.632 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3218100' 00:31:42.632 killing process with pid 3218100 00:31:42.632 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3218100 00:31:42.632 Received shutdown signal, test time was about 10.000000 seconds 00:31:42.632 00:31:42.632 Latency(us) 00:31:42.632 [2024-12-06T14:48:48.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.632 [2024-12-06T14:48:48.630Z] =================================================================================================================== 00:31:42.632 [2024-12-06T14:48:48.630Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:42.632 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3218100 00:31:42.890 15:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:42.890 rmmod nvme_tcp 00:31:42.890 rmmod nvme_fabrics 00:31:42.890 rmmod nvme_keyring 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3218071 ']' 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3218071 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3218071 ']' 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3218071 00:31:42.890 15:48:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3218071 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:42.890 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:42.891 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3218071' 00:31:42.891 killing process with pid 3218071 00:31:42.891 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3218071 00:31:42.891 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3218071 00:31:43.150 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:43.150 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:43.150 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:43.150 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:43.150 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:43.150 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:43.150 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:31:43.150 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:43.150 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:43.150 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.150 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.150 15:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.054 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:45.054 00:31:45.054 real 0m19.764s 00:31:45.054 user 0m22.788s 00:31:45.054 sys 0m6.320s 00:31:45.054 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:45.054 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:45.054 ************************************ 00:31:45.054 END TEST nvmf_queue_depth 00:31:45.054 ************************************ 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:45.312 ************************************ 00:31:45.312 START 
TEST nvmf_target_multipath 00:31:45.312 ************************************ 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:45.312 * Looking for test storage... 00:31:45.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:45.312 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:45.313 15:48:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:45.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.313 --rc genhtml_branch_coverage=1 00:31:45.313 --rc genhtml_function_coverage=1 00:31:45.313 --rc genhtml_legend=1 00:31:45.313 --rc geninfo_all_blocks=1 00:31:45.313 --rc geninfo_unexecuted_blocks=1 00:31:45.313 00:31:45.313 ' 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:45.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.313 --rc genhtml_branch_coverage=1 00:31:45.313 --rc genhtml_function_coverage=1 00:31:45.313 --rc genhtml_legend=1 00:31:45.313 --rc geninfo_all_blocks=1 00:31:45.313 --rc geninfo_unexecuted_blocks=1 00:31:45.313 00:31:45.313 ' 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:45.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.313 --rc genhtml_branch_coverage=1 00:31:45.313 --rc genhtml_function_coverage=1 00:31:45.313 --rc genhtml_legend=1 00:31:45.313 --rc geninfo_all_blocks=1 00:31:45.313 --rc geninfo_unexecuted_blocks=1 00:31:45.313 00:31:45.313 ' 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:45.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.313 --rc genhtml_branch_coverage=1 00:31:45.313 --rc genhtml_function_coverage=1 00:31:45.313 --rc genhtml_legend=1 00:31:45.313 --rc geninfo_all_blocks=1 00:31:45.313 --rc geninfo_unexecuted_blocks=1 00:31:45.313 00:31:45.313 ' 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.313 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.571 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:45.571 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:45.571 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.571 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.571 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.571 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.571 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.571 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:45.571 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.571 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.571 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.571 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.571 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:45.572 15:48:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.572 15:48:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:45.572 15:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:52.140 15:48:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:52.140 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:52.141 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:52.141 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:52.141 Found net devices under 0000:86:00.0: cvl_0_0 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:52.141 15:48:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:52.141 Found net devices under 0000:86:00.1: cvl_0_1 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:52.141 15:48:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:52.141 15:48:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:52.141 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:52.141 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:52.141 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:52.141 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:52.141 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:52.141 15:48:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:52.141 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:52.141 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:52.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:52.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:31:52.141 00:31:52.141 --- 10.0.0.2 ping statistics --- 00:31:52.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:52.141 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:31:52.141 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:52.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:52.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:31:52.142 00:31:52.142 --- 10.0.0.1 ping statistics --- 00:31:52.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:52.142 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:52.142 only one NIC for nvmf test 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:52.142 15:48:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:52.142 rmmod nvme_tcp 00:31:52.142 rmmod nvme_fabrics 00:31:52.142 rmmod nvme_keyring 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:52.142 15:48:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.142 15:48:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:53.521 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.521 
15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.522 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.522 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:53.522 00:31:53.522 real 0m8.317s 00:31:53.522 user 0m1.787s 00:31:53.522 sys 0m4.523s 00:31:53.522 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:53.522 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:53.522 ************************************ 00:31:53.522 END TEST nvmf_target_multipath 00:31:53.522 ************************************ 00:31:53.522 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:53.522 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:53.522 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:53.522 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:53.522 ************************************ 00:31:53.522 START TEST nvmf_zcopy 00:31:53.522 ************************************ 00:31:53.522 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:53.782 * Looking for test storage... 
00:31:53.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:53.782 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:53.782 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:31:53.782 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:53.782 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:53.782 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:53.782 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:53.782 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:53.782 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:53.782 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:53.782 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:53.782 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:53.782 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:53.782 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:53.782 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:53.783 15:48:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:53.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.783 --rc genhtml_branch_coverage=1 00:31:53.783 --rc genhtml_function_coverage=1 00:31:53.783 --rc genhtml_legend=1 00:31:53.783 --rc geninfo_all_blocks=1 00:31:53.783 --rc geninfo_unexecuted_blocks=1 00:31:53.783 00:31:53.783 ' 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:53.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.783 --rc genhtml_branch_coverage=1 00:31:53.783 --rc genhtml_function_coverage=1 00:31:53.783 --rc genhtml_legend=1 00:31:53.783 --rc geninfo_all_blocks=1 00:31:53.783 --rc geninfo_unexecuted_blocks=1 00:31:53.783 00:31:53.783 ' 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:53.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.783 --rc genhtml_branch_coverage=1 00:31:53.783 --rc genhtml_function_coverage=1 00:31:53.783 --rc genhtml_legend=1 00:31:53.783 --rc geninfo_all_blocks=1 00:31:53.783 --rc geninfo_unexecuted_blocks=1 00:31:53.783 00:31:53.783 ' 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:53.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.783 --rc genhtml_branch_coverage=1 00:31:53.783 --rc genhtml_function_coverage=1 00:31:53.783 --rc genhtml_legend=1 00:31:53.783 --rc geninfo_all_blocks=1 00:31:53.783 --rc geninfo_unexecuted_blocks=1 00:31:53.783 00:31:53.783 ' 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.783 15:48:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:53.783 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:53.784 15:48:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:53.784 15:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.498 
15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.498 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.499 15:49:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:00.499 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:00.499 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:00.499 Found net devices under 0000:86:00.0: cvl_0_0 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:00.499 Found net devices under 0000:86:00.1: cvl_0_1 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.499 15:49:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:32:00.499 00:32:00.499 --- 10.0.0.2 ping statistics --- 00:32:00.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.499 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:32:00.499 00:32:00.499 --- 10.0.0.1 ping statistics --- 00:32:00.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.499 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:00.499 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3226747 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3226747 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3226747 ']' 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.500 [2024-12-06 15:49:05.654315] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:00.500 [2024-12-06 15:49:05.655233] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:32:00.500 [2024-12-06 15:49:05.655265] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.500 [2024-12-06 15:49:05.734581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.500 [2024-12-06 15:49:05.774845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.500 [2024-12-06 15:49:05.774881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.500 [2024-12-06 15:49:05.774888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.500 [2024-12-06 15:49:05.774894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.500 [2024-12-06 15:49:05.774899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.500 [2024-12-06 15:49:05.775465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.500 [2024-12-06 15:49:05.844358] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:00.500 [2024-12-06 15:49:05.844558] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.500 [2024-12-06 15:49:05.920144] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.500 
15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.500 [2024-12-06 15:49:05.948392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.500 malloc0 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:00.500 15:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:00.500 { 00:32:00.500 "params": { 00:32:00.500 "name": "Nvme$subsystem", 00:32:00.500 "trtype": "$TEST_TRANSPORT", 00:32:00.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.500 "adrfam": "ipv4", 00:32:00.500 "trsvcid": "$NVMF_PORT", 00:32:00.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.500 "hdgst": ${hdgst:-false}, 00:32:00.500 "ddgst": ${ddgst:-false} 00:32:00.500 }, 00:32:00.500 "method": "bdev_nvme_attach_controller" 00:32:00.500 } 00:32:00.500 EOF 00:32:00.500 )") 00:32:00.500 15:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:00.500 15:49:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:00.500 15:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:00.500 15:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:00.500 "params": { 00:32:00.500 "name": "Nvme1", 00:32:00.500 "trtype": "tcp", 00:32:00.500 "traddr": "10.0.0.2", 00:32:00.500 "adrfam": "ipv4", 00:32:00.500 "trsvcid": "4420", 00:32:00.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:00.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:00.500 "hdgst": false, 00:32:00.500 "ddgst": false 00:32:00.500 }, 00:32:00.500 "method": "bdev_nvme_attach_controller" 00:32:00.500 }' 00:32:00.500 [2024-12-06 15:49:06.046504] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:32:00.500 [2024-12-06 15:49:06.046555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3226856 ] 00:32:00.500 [2024-12-06 15:49:06.119706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.500 [2024-12-06 15:49:06.160245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.500 Running I/O for 10 seconds... 
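The bdevperf invocation above is fed a generated JSON attach config through `/dev/fd/62`. As a minimal sketch of that `gen_nvmf_target_json` heredoc pattern (the transport and address values below are hard-coded assumptions mirroring this particular run, not re-read from the test environment):

```shell
#!/bin/sh
# Sketch of the config-generation pattern seen above: build one
# bdev_nvme_attach_controller entry, defaulting the digest flags to
# false when hdgst/ddgst are unset (the ${var:-false} idiom).
# All values here are illustrative assumptions for this run.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

In the real helper the entries are joined with `IFS=,` and piped through `jq .` before being handed to bdevperf, which is why the expanded config printed above appears as a single normalized JSON object.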
00:32:02.370 8615.00 IOPS, 67.30 MiB/s [2024-12-06T14:49:09.741Z] 8678.50 IOPS, 67.80 MiB/s [2024-12-06T14:49:10.673Z] 8654.00 IOPS, 67.61 MiB/s [2024-12-06T14:49:11.609Z] 8645.75 IOPS, 67.54 MiB/s [2024-12-06T14:49:12.545Z] 8629.80 IOPS, 67.42 MiB/s [2024-12-06T14:49:13.481Z] 8637.00 IOPS, 67.48 MiB/s [2024-12-06T14:49:14.424Z] 8649.43 IOPS, 67.57 MiB/s [2024-12-06T14:49:15.797Z] 8653.50 IOPS, 67.61 MiB/s [2024-12-06T14:49:16.732Z] 8656.11 IOPS, 67.63 MiB/s [2024-12-06T14:49:16.732Z] 8658.10 IOPS, 67.64 MiB/s 00:32:10.734 Latency(us) 00:32:10.734 [2024-12-06T14:49:16.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:10.734 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:32:10.734 Verification LBA range: start 0x0 length 0x1000 00:32:10.734 Nvme1n1 : 10.01 8663.59 67.68 0.00 0.00 14733.53 2044.10 21096.35 00:32:10.734 [2024-12-06T14:49:16.732Z] =================================================================================================================== 00:32:10.734 [2024-12-06T14:49:16.732Z] Total : 8663.59 67.68 0.00 0.00 14733.53 2044.10 21096.35 00:32:10.734 15:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3228600 00:32:10.734 15:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:32:10.734 15:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:10.734 15:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:32:10.734 15:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:32:10.734 15:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:10.734 15:49:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:10.734 15:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:10.734 15:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:10.734 { 00:32:10.734 "params": { 00:32:10.734 "name": "Nvme$subsystem", 00:32:10.734 "trtype": "$TEST_TRANSPORT", 00:32:10.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:10.734 "adrfam": "ipv4", 00:32:10.734 "trsvcid": "$NVMF_PORT", 00:32:10.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:10.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:10.734 "hdgst": ${hdgst:-false}, 00:32:10.734 "ddgst": ${ddgst:-false} 00:32:10.734 }, 00:32:10.734 "method": "bdev_nvme_attach_controller" 00:32:10.734 } 00:32:10.734 EOF 00:32:10.734 )") 00:32:10.734 [2024-12-06 15:49:16.551823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.734 [2024-12-06 15:49:16.551853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.734 15:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:10.734 15:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:32:10.734 15:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:10.734 15:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:10.734 "params": { 00:32:10.734 "name": "Nvme1", 00:32:10.734 "trtype": "tcp", 00:32:10.734 "traddr": "10.0.0.2", 00:32:10.734 "adrfam": "ipv4", 00:32:10.734 "trsvcid": "4420", 00:32:10.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:10.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:10.734 "hdgst": false, 00:32:10.734 "ddgst": false 00:32:10.734 }, 00:32:10.734 "method": "bdev_nvme_attach_controller" 00:32:10.734 }' 00:32:10.734 [2024-12-06 15:49:16.563784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.734 [2024-12-06 15:49:16.563805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.734 [2024-12-06 15:49:16.575782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.734 [2024-12-06 15:49:16.575791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.734 [2024-12-06 15:49:16.587780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.734 [2024-12-06 15:49:16.587789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.734 [2024-12-06 15:49:16.592780] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:32:10.734 [2024-12-06 15:49:16.592820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3228600 ] 00:32:10.734 [2024-12-06 15:49:16.599782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.734 [2024-12-06 15:49:16.599793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.734 [2024-12-06 15:49:16.611779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.734 [2024-12-06 15:49:16.611789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.734 [2024-12-06 15:49:16.623783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.734 [2024-12-06 15:49:16.623792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.734 [2024-12-06 15:49:16.635780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.734 [2024-12-06 15:49:16.635789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.734 [2024-12-06 15:49:16.647782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.734 [2024-12-06 15:49:16.647792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.734 [2024-12-06 15:49:16.659780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.734 [2024-12-06 15:49:16.659788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.734 [2024-12-06 15:49:16.667017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.735 [2024-12-06 15:49:16.671781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:32:10.735 [2024-12-06 15:49:16.671790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.735 [2024-12-06 15:49:16.683783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.735 [2024-12-06 15:49:16.683797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.735 [2024-12-06 15:49:16.695780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.735 [2024-12-06 15:49:16.695790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.735 [2024-12-06 15:49:16.707782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.735 [2024-12-06 15:49:16.707795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.735 [2024-12-06 15:49:16.708611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.735 [2024-12-06 15:49:16.719791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.735 [2024-12-06 15:49:16.719806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.731792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.731808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.743783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.743795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.755783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.755801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.767784] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.767795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.779780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.779790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.791790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.791806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.803790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.803807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.815791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.815807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.827784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.827796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.839779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.839788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.851781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.851790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.863782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.863795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.875784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.875796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.887780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.887789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.899780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.899789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.911779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.911789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.923782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.923793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.935779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.935788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.947780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.947789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.959782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 
[2024-12-06 15:49:16.959794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.971779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.971787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.994 [2024-12-06 15:49:16.983779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.994 [2024-12-06 15:49:16.983791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:16.995780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:16.995789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.007785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.007799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.019785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.019801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 Running I/O for 5 seconds... 
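The long run of paired errors above comes from the test repeatedly driving `nvmf_subsystem_add_ns` against an NSID that is already in use while bdevperf runs; each attempt logs the same two lines. A quick way to confirm that pairing when reading such a capture (a sketch only — the sample lines are copied from the output above, and the `sample.log` file name is illustrative):

```shell
#!/bin/sh
# Sanity-check that every "Requested NSID 1 already in use" error in a
# captured log has a matching "Unable to add namespace" line.
cat > sample.log <<'EOF'
[2024-12-06 15:49:17.035867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-12-06 15:49:17.035886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[2024-12-06 15:49:17.049270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-12-06 15:49:17.049288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
EOF

nsid_errs=$(grep -c 'Requested NSID 1 already in use' sample.log)
rpc_errs=$(grep -c 'Unable to add namespace' sample.log)
echo "nsid=$nsid_errs rpc=$rpc_errs"   # prints: nsid=2 rpc=2
[ "$nsid_errs" -eq "$rpc_errs" ]
```

An unequal count would suggest a truncated capture rather than a new failure mode, since the RPC layer emits the second line for every rejected add.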
00:32:11.254 [2024-12-06 15:49:17.035867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.035886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.049270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.049288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.063845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.063863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.075996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.076015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.089308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.089326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.103836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.103854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.116531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.116549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.131460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.131477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.145429] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.145446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.159821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.159839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.172698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.172715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.187389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.187407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.201868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.201886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.216520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.216545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.231351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.231374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.254 [2024-12-06 15:49:17.245220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.254 [2024-12-06 15:49:17.245242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.259822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.259840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.270234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.270257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.285266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.285284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.299910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.299928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.311135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.311153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.325914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.325931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.340352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.340373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.355070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.355088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.369466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 
[2024-12-06 15:49:17.369485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.384566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.384585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.400264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.400283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.415401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.415421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.429654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.429673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.444176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.444194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.459989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.460007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.471273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.471291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.485360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.485385] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.514 [2024-12-06 15:49:17.499826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.514 [2024-12-06 15:49:17.499845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.511728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.511751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.526001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.526019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.540891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.540909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.555712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.555736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.568690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.568708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.583427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.583445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.595986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.596005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:11.774 [2024-12-06 15:49:17.609725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.609744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.624403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.624423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.636451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.636470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.649249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.649267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.664426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.664444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.679122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.679141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.693610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.693629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.708377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.708396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.774 [2024-12-06 15:49:17.722692] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.774 [2024-12-06 15:49:17.722711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc.c:1520:nvmf_rpc_ns_paused error pair repeats at roughly 12-16 ms intervals from 15:49:17.737 through 15:49:20.028 ...]
00:32:12.293 16746.00 IOPS, 130.83 MiB/s [2024-12-06T14:49:18.291Z]
00:32:13.073 16815.50 IOPS, 131.37 MiB/s [2024-12-06T14:49:19.071Z]
00:32:14.110 16810.33 IOPS, 131.33 MiB/s [2024-12-06T14:49:20.108Z] [2024-12-06 15:49:20.043674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
[2024-12-06 15:49:20.043692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.110 [2024-12-06 15:49:20.054840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.110 [2024-12-06 15:49:20.054859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.110 [2024-12-06 15:49:20.070506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.110 [2024-12-06 15:49:20.070525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.110 [2024-12-06 15:49:20.084830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.110 [2024-12-06 15:49:20.084854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.110 [2024-12-06 15:49:20.099986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.110 [2024-12-06 15:49:20.100003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.369 [2024-12-06 15:49:20.111525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.369 [2024-12-06 15:49:20.111543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.369 [2024-12-06 15:49:20.125924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.369 [2024-12-06 15:49:20.125941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.369 [2024-12-06 15:49:20.141177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.369 [2024-12-06 15:49:20.141195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.369 [2024-12-06 15:49:20.155598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.369 [2024-12-06 15:49:20.155616] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.369 [2024-12-06 15:49:20.168485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.369 [2024-12-06 15:49:20.168502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.369 [2024-12-06 15:49:20.184297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.369 [2024-12-06 15:49:20.184324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.369 [2024-12-06 15:49:20.199937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.369 [2024-12-06 15:49:20.199955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.369 [2024-12-06 15:49:20.212469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.369 [2024-12-06 15:49:20.212487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.369 [2024-12-06 15:49:20.227182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.370 [2024-12-06 15:49:20.227199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.370 [2024-12-06 15:49:20.241796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.370 [2024-12-06 15:49:20.241815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.370 [2024-12-06 15:49:20.256416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.370 [2024-12-06 15:49:20.256434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.370 [2024-12-06 15:49:20.272016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.370 [2024-12-06 15:49:20.272035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:14.370 [2024-12-06 15:49:20.284419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.370 [2024-12-06 15:49:20.284438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.370 [2024-12-06 15:49:20.300406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.370 [2024-12-06 15:49:20.300426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.370 [2024-12-06 15:49:20.315253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.370 [2024-12-06 15:49:20.315273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.370 [2024-12-06 15:49:20.329021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.370 [2024-12-06 15:49:20.329040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.370 [2024-12-06 15:49:20.343836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.370 [2024-12-06 15:49:20.343854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.370 [2024-12-06 15:49:20.355512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.370 [2024-12-06 15:49:20.355531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.629 [2024-12-06 15:49:20.369610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.629 [2024-12-06 15:49:20.369633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.629 [2024-12-06 15:49:20.384577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.629 [2024-12-06 15:49:20.384595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.629 [2024-12-06 15:49:20.400076] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.629 [2024-12-06 15:49:20.400094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.629 [2024-12-06 15:49:20.415880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.629 [2024-12-06 15:49:20.415899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.629 [2024-12-06 15:49:20.428883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.629 [2024-12-06 15:49:20.428901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.629 [2024-12-06 15:49:20.443822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.629 [2024-12-06 15:49:20.443841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.629 [2024-12-06 15:49:20.457414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.629 [2024-12-06 15:49:20.457439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.629 [2024-12-06 15:49:20.472160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.629 [2024-12-06 15:49:20.472178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.629 [2024-12-06 15:49:20.487447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.629 [2024-12-06 15:49:20.487465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.629 [2024-12-06 15:49:20.501355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.630 [2024-12-06 15:49:20.501383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.630 [2024-12-06 15:49:20.516376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:14.630 [2024-12-06 15:49:20.516395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.630 [2024-12-06 15:49:20.531854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.630 [2024-12-06 15:49:20.531873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.630 [2024-12-06 15:49:20.544690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.630 [2024-12-06 15:49:20.544708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.630 [2024-12-06 15:49:20.560215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.630 [2024-12-06 15:49:20.560239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.630 [2024-12-06 15:49:20.575199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.630 [2024-12-06 15:49:20.575218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.630 [2024-12-06 15:49:20.589840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.630 [2024-12-06 15:49:20.589859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.630 [2024-12-06 15:49:20.604638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.630 [2024-12-06 15:49:20.604657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.630 [2024-12-06 15:49:20.619992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.630 [2024-12-06 15:49:20.620010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.630962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 
[2024-12-06 15:49:20.630980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.645699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.645717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.660485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.660503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.675451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.675471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.689792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.689810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.704785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.704805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.719595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.719614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.732240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.732262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.745517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.745535] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.759858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.759877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.771357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.771381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.785653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.785671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.800620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.800637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.815138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.815157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.828866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.828884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.843476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.843494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.857149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.857167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:14.889 [2024-12-06 15:49:20.872041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.872060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:14.889 [2024-12-06 15:49:20.883943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:14.889 [2024-12-06 15:49:20.883961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:20.897995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:20.898013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:20.912901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:20.912918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:20.928157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:20.928175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:20.944121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:20.944138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:20.959518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:20.959536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:20.973690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:20.973708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:20.988143] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:20.988160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:21.003569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:21.003587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:21.017651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:21.017669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 16788.75 IOPS, 131.16 MiB/s [2024-12-06T14:49:21.146Z] [2024-12-06 15:49:21.032412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:21.032429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:21.047701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:21.047720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:21.061022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:21.061039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:21.075545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:21.075563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:21.088378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:21.088411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:21.101051] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:21.101069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:21.112043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:21.112061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:21.125984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:21.126002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.148 [2024-12-06 15:49:21.140558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.148 [2024-12-06 15:49:21.140576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.407 [2024-12-06 15:49:21.155746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.407 [2024-12-06 15:49:21.155764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.407 [2024-12-06 15:49:21.167882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.407 [2024-12-06 15:49:21.167900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.407 [2024-12-06 15:49:21.181945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.407 [2024-12-06 15:49:21.181962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.407 [2024-12-06 15:49:21.197036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.407 [2024-12-06 15:49:21.197054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.407 [2024-12-06 15:49:21.212393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:15.407 [2024-12-06 15:49:21.212411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.407 [2024-12-06 15:49:21.227325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.407 [2024-12-06 15:49:21.227343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.407 [2024-12-06 15:49:21.240883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.407 [2024-12-06 15:49:21.240901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.407 [2024-12-06 15:49:21.255232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.407 [2024-12-06 15:49:21.255251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.407 [2024-12-06 15:49:21.267512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.407 [2024-12-06 15:49:21.267531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.408 [2024-12-06 15:49:21.281520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.408 [2024-12-06 15:49:21.281538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.408 [2024-12-06 15:49:21.296762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.408 [2024-12-06 15:49:21.296780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.408 [2024-12-06 15:49:21.311534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.408 [2024-12-06 15:49:21.311552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.408 [2024-12-06 15:49:21.325342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.408 
[2024-12-06 15:49:21.325364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.408 [2024-12-06 15:49:21.339310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.408 [2024-12-06 15:49:21.339329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.408 [2024-12-06 15:49:21.350942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.408 [2024-12-06 15:49:21.350960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.408 [2024-12-06 15:49:21.365535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.408 [2024-12-06 15:49:21.365553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.408 [2024-12-06 15:49:21.380278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.408 [2024-12-06 15:49:21.380295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.408 [2024-12-06 15:49:21.395764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.408 [2024-12-06 15:49:21.395781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.409973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.409991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.424731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.424748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.439794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.439813] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.452944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.452962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.467997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.468016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.478659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.478677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.493291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.493309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.507818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.507836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.520036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.520058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.533469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.533487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.548637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.548654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:15.667 [2024-12-06 15:49:21.563713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.563733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.577512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.577530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.592385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.592403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.607787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.607805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.621485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.621502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.636632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.636650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.667 [2024-12-06 15:49:21.651409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.667 [2024-12-06 15:49:21.651426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.927 [2024-12-06 15:49:21.665718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.927 [2024-12-06 15:49:21.665736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:15.927 [2024-12-06 15:49:21.680524] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:15.927 [2024-12-06 15:49:21.680542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [previous two messages repeated for each subsequent attempt, timestamps 15:49:21.696 through 15:49:22.009] 00:32:16.187 [2024-12-06 15:49:22.024218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.187 [2024-12-06 15:49:22.024236] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.187 16784.60 IOPS, 131.13 MiB/s [2024-12-06T14:49:22.185Z] [2024-12-06 15:49:22.035169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.187 [2024-12-06 15:49:22.035187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.187 00:32:16.187 Latency(us) 00:32:16.187 [2024-12-06T14:49:22.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.187 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:32:16.187 Nvme1n1 : 5.01 16786.97 131.15 0.00 0.00 7617.94 2012.89 12857.54 00:32:16.187 [2024-12-06T14:49:22.185Z] =================================================================================================================== 00:32:16.187 [2024-12-06T14:49:22.185Z] Total : 16786.97 131.15 0.00 0.00 7617.94 2012.89 12857.54 00:32:16.187 [2024-12-06 15:49:22.043791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.187 [2024-12-06 15:49:22.043807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.187 [2024-12-06 15:49:22.055786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.187 [2024-12-06 15:49:22.055802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.187 [2024-12-06 15:49:22.067796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.187 [2024-12-06 15:49:22.067809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.187 [2024-12-06 15:49:22.079796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.187 [2024-12-06 15:49:22.079812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.187 [2024-12-06 15:49:22.091786] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.187 [2024-12-06 15:49:22.091799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.187 [2024-12-06 15:49:22.103791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.187 [2024-12-06 15:49:22.103805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.187 [2024-12-06 15:49:22.115784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.187 [2024-12-06 15:49:22.115796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.187 [2024-12-06 15:49:22.127785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.187 [2024-12-06 15:49:22.127799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.187 [2024-12-06 15:49:22.139783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.187 [2024-12-06 15:49:22.139796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.187 [2024-12-06 15:49:22.151782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.188 [2024-12-06 15:49:22.151793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.188 [2024-12-06 15:49:22.163781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.188 [2024-12-06 15:49:22.163790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.188 [2024-12-06 15:49:22.175792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.188 [2024-12-06 15:49:22.175803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.447 [2024-12-06 15:49:22.187782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:16.447 [2024-12-06 15:49:22.187792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.447 [2024-12-06 15:49:22.199781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:16.447 [2024-12-06 15:49:22.199790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3228600) - No such process 00:32:16.447 15:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3228600 00:32:16.447 15:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.447 15:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.447 15:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:16.447 15:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.447 15:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:16.447 15:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.447 15:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:16.447 delay0 00:32:16.447 15:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.447 15:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:16.447 15:49:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.447 15:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:16.447 15:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.447 15:49:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:16.447 [2024-12-06 15:49:22.352545] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:24.574 Initializing NVMe Controllers 00:32:24.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:24.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:24.574 Initialization complete. Launching workers. 
00:32:24.574 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 7006 00:32:24.574 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7293, failed to submit 33 00:32:24.574 success 7140, unsuccessful 153, failed 0 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:24.574 rmmod nvme_tcp 00:32:24.574 rmmod nvme_fabrics 00:32:24.574 rmmod nvme_keyring 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3226747 ']' 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3226747 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 
-- # '[' -z 3226747 ']' 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3226747 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3226747 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3226747' 00:32:24.574 killing process with pid 3226747 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3226747 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3226747 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:24.574 15:49:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.574 15:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.951 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:25.951 00:32:25.951 real 0m32.208s 00:32:25.951 user 0m41.729s 00:32:25.951 sys 0m12.854s 00:32:25.951 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.951 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:25.951 ************************************ 00:32:25.951 END TEST nvmf_zcopy 00:32:25.951 ************************************ 00:32:25.951 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:25.951 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:25.951 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.951 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:25.951 
************************************ 00:32:25.951 START TEST nvmf_nmic 00:32:25.951 ************************************ 00:32:25.951 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:25.951 * Looking for test storage... 00:32:25.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:25.951 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:25.951 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:32:25.951 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:26.210 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:26.211 15:49:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:26.211 15:49:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:26.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.211 --rc genhtml_branch_coverage=1 00:32:26.211 --rc genhtml_function_coverage=1 00:32:26.211 --rc genhtml_legend=1 00:32:26.211 --rc geninfo_all_blocks=1 00:32:26.211 --rc geninfo_unexecuted_blocks=1 00:32:26.211 00:32:26.211 ' 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:26.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.211 --rc genhtml_branch_coverage=1 00:32:26.211 --rc genhtml_function_coverage=1 00:32:26.211 --rc genhtml_legend=1 00:32:26.211 --rc geninfo_all_blocks=1 00:32:26.211 --rc geninfo_unexecuted_blocks=1 00:32:26.211 00:32:26.211 ' 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:26.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.211 --rc genhtml_branch_coverage=1 00:32:26.211 --rc genhtml_function_coverage=1 00:32:26.211 --rc genhtml_legend=1 00:32:26.211 --rc geninfo_all_blocks=1 00:32:26.211 --rc geninfo_unexecuted_blocks=1 00:32:26.211 00:32:26.211 ' 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:26.211 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.211 --rc genhtml_branch_coverage=1 00:32:26.211 --rc genhtml_function_coverage=1 00:32:26.211 --rc genhtml_legend=1 00:32:26.211 --rc geninfo_all_blocks=1 00:32:26.211 --rc geninfo_unexecuted_blocks=1 00:32:26.211 00:32:26.211 ' 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:26.211 15:49:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.211 15:49:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.211 15:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.211 15:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:26.211 15:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:26.211 15:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:26.211 15:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.776 15:49:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.776 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:32.777 15:49:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:32.777 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:32.777 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.777 15:49:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:32.777 Found net devices under 0000:86:00.0: cvl_0_0 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.777 15:49:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:32.777 Found net devices under 0000:86:00.1: cvl_0_1 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:32.777 15:49:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:32.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:32.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:32:32.777 00:32:32.777 --- 10.0.0.2 ping statistics --- 00:32:32.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.777 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:32.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:32.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:32:32.777 00:32:32.777 --- 10.0.0.1 ping statistics --- 00:32:32.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.777 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3234087 
00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3234087 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:32.777 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3234087 ']' 00:32:32.778 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.778 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.778 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.778 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.778 15:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.778 [2024-12-06 15:49:37.972040] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:32.778 [2024-12-06 15:49:37.973035] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:32:32.778 [2024-12-06 15:49:37.973077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.778 [2024-12-06 15:49:38.052125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:32.778 [2024-12-06 15:49:38.098876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:32.778 [2024-12-06 15:49:38.098912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:32.778 [2024-12-06 15:49:38.098919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:32.778 [2024-12-06 15:49:38.098925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:32.778 [2024-12-06 15:49:38.098932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:32.778 [2024-12-06 15:49:38.100453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.778 [2024-12-06 15:49:38.100561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:32.778 [2024-12-06 15:49:38.100584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.778 [2024-12-06 15:49:38.100585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:32.778 [2024-12-06 15:49:38.170321] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:32.778 [2024-12-06 15:49:38.170568] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:32.778 [2024-12-06 15:49:38.171176] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:32.778 [2024-12-06 15:49:38.171322] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:32.778 [2024-12-06 15:49:38.171391] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.778 [2024-12-06 15:49:38.233437] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.778 Malloc0 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.778 [2024-12-06 15:49:38.321564] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.778 15:49:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:32.778 test case1: single bdev can't be used in multiple subsystems 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.778 [2024-12-06 15:49:38.353105] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:32.778 [2024-12-06 15:49:38.353128] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:32.778 [2024-12-06 15:49:38.353135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:32.778 request: 00:32:32.778 { 00:32:32.778 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:32.778 "namespace": { 00:32:32.778 "bdev_name": "Malloc0", 00:32:32.778 "no_auto_visible": false, 00:32:32.778 "hide_metadata": false 00:32:32.778 }, 00:32:32.778 "method": "nvmf_subsystem_add_ns", 00:32:32.778 "req_id": 1 00:32:32.778 } 00:32:32.778 Got JSON-RPC error response 00:32:32.778 response: 00:32:32.778 { 00:32:32.778 "code": -32602, 00:32:32.778 "message": "Invalid parameters" 00:32:32.778 } 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:32.778 Adding namespace failed - expected result. 
00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:32.778 test case2: host connect to nvmf target in multiple paths 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.778 [2024-12-06 15:49:38.365208] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:32.778 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:33.036 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:33.036 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:33.036 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:33.036 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:33.036 15:49:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:34.995 15:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:34.995 15:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:34.996 15:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:34.996 15:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:34.996 15:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:34.996 15:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:34.996 15:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:34.996 [global] 00:32:34.996 thread=1 00:32:34.996 invalidate=1 00:32:34.996 rw=write 00:32:34.996 time_based=1 00:32:34.996 runtime=1 00:32:34.996 ioengine=libaio 00:32:34.996 direct=1 00:32:34.996 bs=4096 00:32:34.996 iodepth=1 00:32:34.996 norandommap=0 00:32:34.996 numjobs=1 00:32:34.996 00:32:34.996 verify_dump=1 00:32:34.996 verify_backlog=512 00:32:34.996 verify_state_save=0 00:32:34.996 do_verify=1 00:32:34.996 verify=crc32c-intel 00:32:34.996 [job0] 00:32:34.996 filename=/dev/nvme0n1 00:32:34.996 Could not set queue depth (nvme0n1) 00:32:35.339 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:35.339 fio-3.35 00:32:35.339 Starting 1 thread 00:32:36.717 00:32:36.717 job0: (groupid=0, jobs=1): err= 0: pid=3234795: Fri Dec 6 
15:49:42 2024 00:32:36.717 read: IOPS=22, BW=89.8KiB/s (91.9kB/s)(92.0KiB/1025msec) 00:32:36.717 slat (nsec): min=9859, max=23804, avg=22417.09, stdev=2827.31 00:32:36.717 clat (usec): min=40808, max=41381, avg=40980.04, stdev=107.97 00:32:36.717 lat (usec): min=40832, max=41391, avg=41002.45, stdev=105.68 00:32:36.717 clat percentiles (usec): 00:32:36.717 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:36.717 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:36.717 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:36.717 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:36.717 | 99.99th=[41157] 00:32:36.717 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:32:36.717 slat (nsec): min=9879, max=43013, avg=11298.24, stdev=2249.77 00:32:36.717 clat (usec): min=134, max=1259, avg=145.18, stdev=50.91 00:32:36.717 lat (usec): min=144, max=1270, avg=156.48, stdev=51.28 00:32:36.717 clat percentiles (usec): 00:32:36.717 | 1.00th=[ 137], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:32:36.717 | 30.00th=[ 141], 40.00th=[ 141], 50.00th=[ 141], 60.00th=[ 143], 00:32:36.717 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 151], 00:32:36.717 | 99.00th=[ 194], 99.50th=[ 247], 99.90th=[ 1254], 99.95th=[ 1254], 00:32:36.717 | 99.99th=[ 1254] 00:32:36.717 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:32:36.717 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:36.717 lat (usec) : 250=95.33%, 500=0.19% 00:32:36.717 lat (msec) : 2=0.19%, 50=4.30% 00:32:36.717 cpu : usr=0.39%, sys=0.88%, ctx=535, majf=0, minf=1 00:32:36.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:36.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.717 
issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:36.717 00:32:36.717 Run status group 0 (all jobs): 00:32:36.717 READ: bw=89.8KiB/s (91.9kB/s), 89.8KiB/s-89.8KiB/s (91.9kB/s-91.9kB/s), io=92.0KiB (94.2kB), run=1025-1025msec 00:32:36.717 WRITE: bw=1998KiB/s (2046kB/s), 1998KiB/s-1998KiB/s (2046kB/s-2046kB/s), io=2048KiB (2097kB), run=1025-1025msec 00:32:36.717 00:32:36.717 Disk stats (read/write): 00:32:36.717 nvme0n1: ios=69/512, merge=0/0, ticks=806/67, in_queue=873, util=91.48% 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:36.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:36.717 15:49:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:36.717 rmmod nvme_tcp 00:32:36.717 rmmod nvme_fabrics 00:32:36.717 rmmod nvme_keyring 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3234087 ']' 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3234087 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3234087 ']' 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3234087 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3234087 
00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3234087' 00:32:36.717 killing process with pid 3234087 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3234087 00:32:36.717 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3234087 00:32:36.977 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:36.977 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:36.977 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:36.977 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:36.977 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:36.977 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:36.977 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:36.977 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:36.977 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:36.977 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.977 15:49:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.977 15:49:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.512 15:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:39.512 00:32:39.512 real 0m13.161s 00:32:39.512 user 0m24.400s 00:32:39.512 sys 0m6.026s 00:32:39.512 15:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.512 15:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.512 ************************************ 00:32:39.512 END TEST nvmf_nmic 00:32:39.513 ************************************ 00:32:39.513 15:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:39.513 15:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:39.513 15:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.513 15:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:39.513 ************************************ 00:32:39.513 START TEST nvmf_fio_target 00:32:39.513 ************************************ 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:39.513 * Looking for test storage... 
00:32:39.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.513 
15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:39.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.513 --rc genhtml_branch_coverage=1 00:32:39.513 --rc genhtml_function_coverage=1 00:32:39.513 --rc genhtml_legend=1 00:32:39.513 --rc geninfo_all_blocks=1 00:32:39.513 --rc geninfo_unexecuted_blocks=1 00:32:39.513 00:32:39.513 ' 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:39.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.513 --rc genhtml_branch_coverage=1 00:32:39.513 --rc genhtml_function_coverage=1 00:32:39.513 --rc genhtml_legend=1 00:32:39.513 --rc geninfo_all_blocks=1 00:32:39.513 --rc geninfo_unexecuted_blocks=1 00:32:39.513 00:32:39.513 ' 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:39.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.513 --rc genhtml_branch_coverage=1 00:32:39.513 --rc genhtml_function_coverage=1 00:32:39.513 --rc genhtml_legend=1 00:32:39.513 --rc geninfo_all_blocks=1 00:32:39.513 --rc geninfo_unexecuted_blocks=1 00:32:39.513 00:32:39.513 ' 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:39.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.513 --rc genhtml_branch_coverage=1 00:32:39.513 --rc genhtml_function_coverage=1 00:32:39.513 --rc genhtml_legend=1 00:32:39.513 --rc geninfo_all_blocks=1 
00:32:39.513 --rc geninfo_unexecuted_blocks=1 00:32:39.513 00:32:39.513 ' 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:39.513 
15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.513 15:49:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.513 
15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:39.513 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.514 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:39.514 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:39.514 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:39.514 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.514 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.514 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.514 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:39.514 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:39.514 15:49:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.514 15:49:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.080 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.080 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:46.080 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:46.080 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:46.080 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:46.080 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:46.080 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:46.080 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:46.080 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:46.080 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:46.080 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:46.080 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:46.081 15:49:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:46.081 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:46.081 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.081 
15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:46.081 Found net 
devices under 0000:86:00.0: cvl_0_0 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:46.081 Found net devices under 0000:86:00.1: cvl_0_1 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:46.081 15:49:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:46.081 15:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.081 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.081 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.081 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:46.081 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:46.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:32:46.081 00:32:46.081 --- 10.0.0.2 ping statistics --- 00:32:46.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.081 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:32:46.081 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:46.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:32:46.081 00:32:46.081 --- 10.0.0.1 ping statistics --- 00:32:46.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.081 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:32:46.081 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.082 15:49:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3238489 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3238489 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3238489 ']' 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:46.082 15:49:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.082 [2024-12-06 15:49:51.182844] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:46.082 [2024-12-06 15:49:51.183796] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:32:46.082 [2024-12-06 15:49:51.183833] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.082 [2024-12-06 15:49:51.264672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:46.082 [2024-12-06 15:49:51.307100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:46.082 [2024-12-06 15:49:51.307135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.082 [2024-12-06 15:49:51.307142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.082 [2024-12-06 15:49:51.307148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.082 [2024-12-06 15:49:51.307153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:46.082 [2024-12-06 15:49:51.308748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.082 [2024-12-06 15:49:51.308856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:46.082 [2024-12-06 15:49:51.308964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.082 [2024-12-06 15:49:51.308964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:46.082 [2024-12-06 15:49:51.378517] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:46.082 [2024-12-06 15:49:51.379053] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:46.082 [2024-12-06 15:49:51.379421] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:46.082 [2024-12-06 15:49:51.379627] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:46.082 [2024-12-06 15:49:51.379681] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:46.082 15:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:46.082 15:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:46.082 15:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:46.082 15:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:46.082 15:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:46.082 15:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.082 15:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:46.341 [2024-12-06 15:49:52.237813] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:46.341 15:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:46.600 15:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:46.600 15:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:32:46.860 15:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:46.860 15:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.119 15:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:47.119 15:49:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.378 15:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:47.378 15:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:47.378 15:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.636 15:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:47.636 15:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.896 15:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:47.896 15:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:48.155 15:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:32:48.155 15:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:48.155 15:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:48.413 15:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:48.413 15:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:48.672 15:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:48.672 15:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:48.931 15:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.931 [2024-12-06 15:49:54.837623] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.931 15:49:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:49.189 15:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:49.446 15:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:49.704 15:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:49.704 15:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:49.704 15:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:49.704 15:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:49.704 15:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:49.704 15:49:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:51.602 15:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:51.602 15:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:51.602 15:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:51.602 15:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:51.602 15:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:51.602 15:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:32:51.602 15:49:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:51.602 [global] 00:32:51.602 thread=1 00:32:51.602 invalidate=1 00:32:51.602 rw=write 00:32:51.602 time_based=1 00:32:51.602 runtime=1 00:32:51.602 ioengine=libaio 00:32:51.602 direct=1 00:32:51.602 bs=4096 00:32:51.602 iodepth=1 00:32:51.602 norandommap=0 00:32:51.602 numjobs=1 00:32:51.602 00:32:51.602 verify_dump=1 00:32:51.602 verify_backlog=512 00:32:51.602 verify_state_save=0 00:32:51.602 do_verify=1 00:32:51.602 verify=crc32c-intel 00:32:51.602 [job0] 00:32:51.602 filename=/dev/nvme0n1 00:32:51.602 [job1] 00:32:51.602 filename=/dev/nvme0n2 00:32:51.602 [job2] 00:32:51.602 filename=/dev/nvme0n3 00:32:51.602 [job3] 00:32:51.602 filename=/dev/nvme0n4 00:32:51.602 Could not set queue depth (nvme0n1) 00:32:51.602 Could not set queue depth (nvme0n2) 00:32:51.602 Could not set queue depth (nvme0n3) 00:32:51.602 Could not set queue depth (nvme0n4) 00:32:51.860 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:51.860 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:51.860 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:51.860 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:51.860 fio-3.35 00:32:51.860 Starting 4 threads 00:32:53.233 00:32:53.233 job0: (groupid=0, jobs=1): err= 0: pid=3239681: Fri Dec 6 15:49:59 2024 00:32:53.233 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:53.233 slat (nsec): min=7056, max=39878, avg=8777.58, stdev=1693.93 00:32:53.233 clat (usec): min=190, max=447, avg=238.47, stdev=27.40 00:32:53.233 lat (usec): min=198, max=472, 
avg=247.24, stdev=27.61 00:32:53.233 clat percentiles (usec): 00:32:53.233 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:32:53.233 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 243], 00:32:53.233 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 281], 00:32:53.233 | 99.00th=[ 314], 99.50th=[ 424], 99.90th=[ 441], 99.95th=[ 445], 00:32:53.233 | 99.99th=[ 449] 00:32:53.233 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(9.91MiB/1001msec); 0 zone resets 00:32:53.233 slat (nsec): min=10080, max=46183, avg=11884.07, stdev=1889.11 00:32:53.233 clat (usec): min=130, max=1016, avg=177.22, stdev=36.81 00:32:53.233 lat (usec): min=142, max=1028, avg=189.11, stdev=36.88 00:32:53.233 clat percentiles (usec): 00:32:53.233 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:32:53.233 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:32:53.233 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 212], 95.00th=[ 273], 00:32:53.233 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 302], 99.95th=[ 322], 00:32:53.233 | 99.99th=[ 1020] 00:32:53.233 bw ( KiB/s): min=10976, max=10976, per=40.41%, avg=10976.00, stdev= 0.00, samples=1 00:32:53.233 iops : min= 2744, max= 2744, avg=2744.00, stdev= 0.00, samples=1 00:32:53.233 lat (usec) : 250=86.61%, 500=13.37% 00:32:53.233 lat (msec) : 2=0.02% 00:32:53.233 cpu : usr=4.50%, sys=6.90%, ctx=4585, majf=0, minf=1 00:32:53.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:53.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.233 issued rwts: total=2048,2537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:53.233 job1: (groupid=0, jobs=1): err= 0: pid=3239682: Fri Dec 6 15:49:59 2024 00:32:53.234 read: IOPS=22, BW=88.5KiB/s (90.7kB/s)(92.0KiB/1039msec) 00:32:53.234 slat 
(nsec): min=10464, max=24223, avg=19534.52, stdev=4323.04 00:32:53.234 clat (usec): min=40708, max=41924, avg=41004.41, stdev=217.31 00:32:53.234 lat (usec): min=40718, max=41946, avg=41023.94, stdev=218.18 00:32:53.234 clat percentiles (usec): 00:32:53.234 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:32:53.234 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:53.234 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:53.234 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:53.234 | 99.99th=[41681] 00:32:53.234 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:32:53.234 slat (nsec): min=10702, max=37451, avg=11892.91, stdev=1934.18 00:32:53.234 clat (usec): min=146, max=319, avg=170.66, stdev=14.54 00:32:53.234 lat (usec): min=158, max=357, avg=182.56, stdev=15.17 00:32:53.234 clat percentiles (usec): 00:32:53.234 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:32:53.234 | 30.00th=[ 165], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:32:53.234 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 190], 00:32:53.234 | 99.00th=[ 223], 99.50th=[ 243], 99.90th=[ 322], 99.95th=[ 322], 00:32:53.234 | 99.99th=[ 322] 00:32:53.234 bw ( KiB/s): min= 4096, max= 4096, per=15.08%, avg=4096.00, stdev= 0.00, samples=1 00:32:53.234 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:53.234 lat (usec) : 250=95.33%, 500=0.37% 00:32:53.234 lat (msec) : 50=4.30% 00:32:53.234 cpu : usr=0.19%, sys=0.67%, ctx=535, majf=0, minf=1 00:32:53.234 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:53.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.234 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.234 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:32:53.234 job2: (groupid=0, jobs=1): err= 0: pid=3239683: Fri Dec 6 15:49:59 2024 00:32:53.234 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:32:53.234 slat (nsec): min=6212, max=25558, avg=7457.37, stdev=1857.80 00:32:53.234 clat (usec): min=186, max=41029, avg=700.23, stdev=4192.61 00:32:53.234 lat (usec): min=193, max=41052, avg=707.69, stdev=4193.97 00:32:53.234 clat percentiles (usec): 00:32:53.234 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 227], 00:32:53.234 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 255], 60.00th=[ 273], 00:32:53.234 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 334], 00:32:53.234 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:53.234 | 99.99th=[41157] 00:32:53.234 write: IOPS=1481, BW=5926KiB/s (6068kB/s)(5932KiB/1001msec); 0 zone resets 00:32:53.234 slat (nsec): min=9307, max=36806, avg=11137.65, stdev=1793.70 00:32:53.234 clat (usec): min=133, max=327, avg=170.95, stdev=16.22 00:32:53.234 lat (usec): min=143, max=363, avg=182.09, stdev=16.78 00:32:53.234 clat percentiles (usec): 00:32:53.234 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:32:53.234 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 172], 00:32:53.234 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 196], 00:32:53.234 | 99.00th=[ 241], 99.50th=[ 241], 99.90th=[ 273], 99.95th=[ 326], 00:32:53.234 | 99.99th=[ 326] 00:32:53.234 bw ( KiB/s): min= 4096, max= 4096, per=15.08%, avg=4096.00, stdev= 0.00, samples=1 00:32:53.234 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:53.234 lat (usec) : 250=78.18%, 500=21.34%, 750=0.04% 00:32:53.234 lat (msec) : 50=0.44% 00:32:53.234 cpu : usr=1.40%, sys=2.30%, ctx=2507, majf=0, minf=1 00:32:53.234 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:53.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.234 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.234 issued rwts: total=1024,1483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:53.234 job3: (groupid=0, jobs=1): err= 0: pid=3239684: Fri Dec 6 15:49:59 2024 00:32:53.234 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:53.234 slat (nsec): min=6125, max=27968, avg=7468.10, stdev=1092.18 00:32:53.234 clat (usec): min=190, max=474, avg=244.04, stdev=34.51 00:32:53.234 lat (usec): min=197, max=481, avg=251.50, stdev=34.43 00:32:53.234 clat percentiles (usec): 00:32:53.234 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 225], 00:32:53.234 | 30.00th=[ 229], 40.00th=[ 231], 50.00th=[ 233], 60.00th=[ 237], 00:32:53.234 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 289], 95.00th=[ 314], 00:32:53.234 | 99.00th=[ 392], 99.50th=[ 416], 99.90th=[ 461], 99.95th=[ 465], 00:32:53.234 | 99.99th=[ 474] 00:32:53.234 write: IOPS=2521, BW=9.85MiB/s (10.3MB/s)(9.86MiB/1001msec); 0 zone resets 00:32:53.234 slat (nsec): min=9027, max=39057, avg=10379.61, stdev=1262.07 00:32:53.234 clat (usec): min=123, max=441, avg=177.81, stdev=28.52 00:32:53.234 lat (usec): min=133, max=480, avg=188.19, stdev=28.68 00:32:53.234 clat percentiles (usec): 00:32:53.234 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:32:53.234 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:32:53.234 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 241], 95.00th=[ 243], 00:32:53.234 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[ 326], 99.95th=[ 347], 00:32:53.234 | 99.99th=[ 441] 00:32:53.234 bw ( KiB/s): min=10056, max=10056, per=37.02%, avg=10056.00, stdev= 0.00, samples=1 00:32:53.234 iops : min= 2514, max= 2514, avg=2514.00, stdev= 0.00, samples=1 00:32:53.234 lat (usec) : 250=89.63%, 500=10.37% 00:32:53.234 cpu : usr=2.70%, sys=3.80%, ctx=4572, majf=0, minf=1 00:32:53.234 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:32:53.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.234 issued rwts: total=2048,2524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:53.234 00:32:53.234 Run status group 0 (all jobs): 00:32:53.234 READ: bw=19.3MiB/s (20.3MB/s), 88.5KiB/s-8184KiB/s (90.7kB/s-8380kB/s), io=20.1MiB (21.1MB), run=1001-1039msec 00:32:53.234 WRITE: bw=26.5MiB/s (27.8MB/s), 1971KiB/s-9.90MiB/s (2018kB/s-10.4MB/s), io=27.6MiB (28.9MB), run=1001-1039msec 00:32:53.234 00:32:53.234 Disk stats (read/write): 00:32:53.234 nvme0n1: ios=1865/2048, merge=0/0, ticks=406/335, in_queue=741, util=86.67% 00:32:53.234 nvme0n2: ios=38/512, merge=0/0, ticks=757/85, in_queue=842, util=87.08% 00:32:53.234 nvme0n3: ios=774/1024, merge=0/0, ticks=902/175, in_queue=1077, util=91.44% 00:32:53.234 nvme0n4: ios=1800/2048, merge=0/0, ticks=433/352, in_queue=785, util=89.68% 00:32:53.234 15:49:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:53.234 [global] 00:32:53.234 thread=1 00:32:53.234 invalidate=1 00:32:53.234 rw=randwrite 00:32:53.234 time_based=1 00:32:53.234 runtime=1 00:32:53.234 ioengine=libaio 00:32:53.234 direct=1 00:32:53.234 bs=4096 00:32:53.234 iodepth=1 00:32:53.234 norandommap=0 00:32:53.234 numjobs=1 00:32:53.234 00:32:53.234 verify_dump=1 00:32:53.234 verify_backlog=512 00:32:53.234 verify_state_save=0 00:32:53.234 do_verify=1 00:32:53.234 verify=crc32c-intel 00:32:53.234 [job0] 00:32:53.234 filename=/dev/nvme0n1 00:32:53.234 [job1] 00:32:53.234 filename=/dev/nvme0n2 00:32:53.234 [job2] 00:32:53.234 filename=/dev/nvme0n3 00:32:53.234 [job3] 00:32:53.234 filename=/dev/nvme0n4 00:32:53.234 Could not set queue depth (nvme0n1) 00:32:53.234 
Could not set queue depth (nvme0n2) 00:32:53.234 Could not set queue depth (nvme0n3) 00:32:53.234 Could not set queue depth (nvme0n4) 00:32:53.493 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:53.493 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:53.493 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:53.493 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:53.493 fio-3.35 00:32:53.493 Starting 4 threads 00:32:54.871 00:32:54.871 job0: (groupid=0, jobs=1): err= 0: pid=3240050: Fri Dec 6 15:50:00 2024 00:32:54.871 read: IOPS=2219, BW=8879KiB/s (9092kB/s)(8888KiB/1001msec) 00:32:54.871 slat (nsec): min=6328, max=30180, avg=7358.62, stdev=1117.42 00:32:54.871 clat (usec): min=182, max=528, avg=240.61, stdev=40.72 00:32:54.871 lat (usec): min=189, max=535, avg=247.97, stdev=40.74 00:32:54.871 clat percentiles (usec): 00:32:54.871 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 215], 00:32:54.871 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 245], 00:32:54.871 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 310], 00:32:54.871 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[ 498], 99.95th=[ 502], 00:32:54.871 | 99.99th=[ 529] 00:32:54.871 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:54.871 slat (nsec): min=9236, max=46554, avg=10566.49, stdev=1431.08 00:32:54.871 clat (usec): min=119, max=2815, avg=159.69, stdev=60.69 00:32:54.871 lat (usec): min=129, max=2827, avg=170.26, stdev=60.80 00:32:54.871 clat percentiles (usec): 00:32:54.871 | 1.00th=[ 126], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 139], 00:32:54.871 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:32:54.871 | 70.00th=[ 172], 80.00th=[ 182], 90.00th=[ 200], 95.00th=[ 
223], 00:32:54.871 | 99.00th=[ 258], 99.50th=[ 273], 99.90th=[ 404], 99.95th=[ 469], 00:32:54.871 | 99.99th=[ 2802] 00:32:54.871 bw ( KiB/s): min=11040, max=11040, per=45.51%, avg=11040.00, stdev= 0.00, samples=1 00:32:54.871 iops : min= 2760, max= 2760, avg=2760.00, stdev= 0.00, samples=1 00:32:54.871 lat (usec) : 250=85.07%, 500=14.87%, 750=0.04% 00:32:54.871 lat (msec) : 4=0.02% 00:32:54.871 cpu : usr=2.10%, sys=4.60%, ctx=4785, majf=0, minf=1 00:32:54.871 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.871 issued rwts: total=2222,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.871 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.871 job1: (groupid=0, jobs=1): err= 0: pid=3240051: Fri Dec 6 15:50:00 2024 00:32:54.871 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:32:54.871 slat (nsec): min=9473, max=24199, avg=23082.82, stdev=3064.23 00:32:54.871 clat (usec): min=40900, max=41979, avg=41020.24, stdev=217.82 00:32:54.871 lat (usec): min=40924, max=41989, avg=41043.32, stdev=214.84 00:32:54.871 clat percentiles (usec): 00:32:54.871 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:54.871 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:54.871 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:54.871 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:54.871 | 99.99th=[42206] 00:32:54.871 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:32:54.871 slat (nsec): min=9505, max=40017, avg=10566.90, stdev=1721.35 00:32:54.871 clat (usec): min=152, max=355, avg=179.55, stdev=22.94 00:32:54.871 lat (usec): min=162, max=395, avg=190.12, stdev=23.48 00:32:54.871 clat percentiles (usec): 00:32:54.871 | 1.00th=[ 155], 
5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 163], 00:32:54.871 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:32:54.871 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 202], 95.00th=[ 241], 00:32:54.871 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 355], 99.95th=[ 355], 00:32:54.871 | 99.99th=[ 355] 00:32:54.871 bw ( KiB/s): min= 4096, max= 4096, per=16.88%, avg=4096.00, stdev= 0.00, samples=1 00:32:54.871 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:54.871 lat (usec) : 250=95.51%, 500=0.37% 00:32:54.871 lat (msec) : 50=4.12% 00:32:54.872 cpu : usr=0.10%, sys=0.70%, ctx=535, majf=0, minf=1 00:32:54.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.872 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.872 job2: (groupid=0, jobs=1): err= 0: pid=3240052: Fri Dec 6 15:50:00 2024 00:32:54.872 read: IOPS=305, BW=1220KiB/s (1249kB/s)(1236KiB/1013msec) 00:32:54.872 slat (nsec): min=6334, max=31310, avg=10624.91, stdev=4249.97 00:32:54.872 clat (usec): min=209, max=43749, avg=2909.71, stdev=9991.90 00:32:54.872 lat (usec): min=227, max=43773, avg=2920.33, stdev=9992.42 00:32:54.872 clat percentiles (usec): 00:32:54.872 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 260], 00:32:54.872 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 302], 00:32:54.872 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 343], 95.00th=[40633], 00:32:54.872 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:32:54.872 | 99.99th=[43779] 00:32:54.872 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:32:54.872 slat (nsec): min=9743, max=54556, avg=13208.93, stdev=6190.68 00:32:54.872 clat (usec): min=136, 
max=356, avg=191.03, stdev=25.05 00:32:54.872 lat (usec): min=146, max=394, avg=204.24, stdev=25.42 00:32:54.872 clat percentiles (usec): 00:32:54.872 | 1.00th=[ 145], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 174], 00:32:54.872 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:32:54.872 | 70.00th=[ 200], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 235], 00:32:54.872 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 359], 99.95th=[ 359], 00:32:54.872 | 99.99th=[ 359] 00:32:54.872 bw ( KiB/s): min= 4096, max= 4096, per=16.88%, avg=4096.00, stdev= 0.00, samples=1 00:32:54.872 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:54.872 lat (usec) : 250=66.63%, 500=30.94% 00:32:54.872 lat (msec) : 50=2.44% 00:32:54.872 cpu : usr=0.69%, sys=0.79%, ctx=823, majf=0, minf=1 00:32:54.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.872 issued rwts: total=309,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.872 job3: (groupid=0, jobs=1): err= 0: pid=3240053: Fri Dec 6 15:50:00 2024 00:32:54.872 read: IOPS=2155, BW=8623KiB/s (8830kB/s)(8632KiB/1001msec) 00:32:54.872 slat (nsec): min=6135, max=25629, avg=7237.33, stdev=1328.75 00:32:54.872 clat (usec): min=172, max=36378, avg=260.09, stdev=782.12 00:32:54.872 lat (usec): min=179, max=36389, avg=267.33, stdev=782.22 00:32:54.872 clat percentiles (usec): 00:32:54.872 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 217], 00:32:54.872 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 245], 00:32:54.872 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 310], 00:32:54.872 | 99.00th=[ 359], 99.50th=[ 433], 99.90th=[ 2057], 99.95th=[ 2999], 00:32:54.872 | 99.99th=[36439] 00:32:54.872 write: IOPS=2557, 
BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:54.872 slat (nsec): min=8724, max=56908, avg=10158.13, stdev=2521.06 00:32:54.872 clat (usec): min=117, max=1111, avg=151.31, stdev=33.42 00:32:54.872 lat (usec): min=127, max=1121, avg=161.46, stdev=34.03 00:32:54.872 clat percentiles (usec): 00:32:54.872 | 1.00th=[ 122], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 133], 00:32:54.872 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 00:32:54.872 | 70.00th=[ 151], 80.00th=[ 172], 90.00th=[ 188], 95.00th=[ 210], 00:32:54.872 | 99.00th=[ 235], 99.50th=[ 253], 99.90th=[ 367], 99.95th=[ 478], 00:32:54.872 | 99.99th=[ 1106] 00:32:54.872 bw ( KiB/s): min= 8896, max= 8896, per=36.67%, avg=8896.00, stdev= 0.00, samples=1 00:32:54.872 iops : min= 2224, max= 2224, avg=2224.00, stdev= 0.00, samples=1 00:32:54.872 lat (usec) : 250=88.26%, 500=11.62%, 1000=0.02% 00:32:54.872 lat (msec) : 2=0.04%, 4=0.04%, 50=0.02% 00:32:54.872 cpu : usr=2.40%, sys=4.10%, ctx=4718, majf=0, minf=2 00:32:54.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.872 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.872 issued rwts: total=2158,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.872 00:32:54.872 Run status group 0 (all jobs): 00:32:54.872 READ: bw=18.2MiB/s (19.0MB/s), 87.6KiB/s-8879KiB/s (89.7kB/s-9092kB/s), io=18.4MiB (19.3MB), run=1001-1013msec 00:32:54.872 WRITE: bw=23.7MiB/s (24.8MB/s), 2022KiB/s-9.99MiB/s (2070kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1013msec 00:32:54.872 00:32:54.872 Disk stats (read/write): 00:32:54.872 nvme0n1: ios=2014/2048, merge=0/0, ticks=887/328, in_queue=1215, util=97.60% 00:32:54.872 nvme0n2: ios=58/512, merge=0/0, ticks=1079/94, in_queue=1173, util=97.57% 00:32:54.872 nvme0n3: ios=348/512, merge=0/0, 
ticks=1414/93, in_queue=1507, util=97.51% 00:32:54.872 nvme0n4: ios=1891/2048, merge=0/0, ticks=485/302, in_queue=787, util=89.75% 00:32:54.872 15:50:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:54.872 [global] 00:32:54.872 thread=1 00:32:54.872 invalidate=1 00:32:54.872 rw=write 00:32:54.872 time_based=1 00:32:54.872 runtime=1 00:32:54.872 ioengine=libaio 00:32:54.872 direct=1 00:32:54.872 bs=4096 00:32:54.872 iodepth=128 00:32:54.872 norandommap=0 00:32:54.872 numjobs=1 00:32:54.872 00:32:54.872 verify_dump=1 00:32:54.872 verify_backlog=512 00:32:54.872 verify_state_save=0 00:32:54.872 do_verify=1 00:32:54.872 verify=crc32c-intel 00:32:54.872 [job0] 00:32:54.872 filename=/dev/nvme0n1 00:32:54.872 [job1] 00:32:54.872 filename=/dev/nvme0n2 00:32:54.872 [job2] 00:32:54.872 filename=/dev/nvme0n3 00:32:54.872 [job3] 00:32:54.872 filename=/dev/nvme0n4 00:32:54.872 Could not set queue depth (nvme0n1) 00:32:54.872 Could not set queue depth (nvme0n2) 00:32:54.872 Could not set queue depth (nvme0n3) 00:32:54.872 Could not set queue depth (nvme0n4) 00:32:55.131 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:55.131 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:55.131 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:55.131 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:55.131 fio-3.35 00:32:55.131 Starting 4 threads 00:32:56.515 00:32:56.515 job0: (groupid=0, jobs=1): err= 0: pid=3240445: Fri Dec 6 15:50:02 2024 00:32:56.515 read: IOPS=4190, BW=16.4MiB/s (17.2MB/s)(16.6MiB/1011msec) 00:32:56.515 slat (nsec): min=1184, max=22419k, avg=109063.98, stdev=864982.26 
00:32:56.515 clat (usec): min=3447, max=33891, avg=14587.43, stdev=5004.95 00:32:56.515 lat (usec): min=4501, max=33902, avg=14696.49, stdev=5048.86 00:32:56.515 clat percentiles (usec): 00:32:56.515 | 1.00th=[ 5997], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[10683], 00:32:56.515 | 30.00th=[11600], 40.00th=[12911], 50.00th=[13829], 60.00th=[14615], 00:32:56.515 | 70.00th=[15664], 80.00th=[16909], 90.00th=[21365], 95.00th=[24773], 00:32:56.515 | 99.00th=[32113], 99.50th=[32113], 99.90th=[33817], 99.95th=[33817], 00:32:56.515 | 99.99th=[33817] 00:32:56.515 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:32:56.515 slat (usec): min=2, max=21459, avg=108.65, stdev=771.81 00:32:56.515 clat (usec): min=3550, max=41793, avg=14425.57, stdev=5294.39 00:32:56.515 lat (usec): min=3559, max=41818, avg=14534.21, stdev=5351.07 00:32:56.515 clat percentiles (usec): 00:32:56.515 | 1.00th=[ 6063], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10159], 00:32:56.515 | 30.00th=[10814], 40.00th=[11207], 50.00th=[12780], 60.00th=[13960], 00:32:56.515 | 70.00th=[16581], 80.00th=[20579], 90.00th=[21365], 95.00th=[23725], 00:32:56.515 | 99.00th=[29754], 99.50th=[30540], 99.90th=[30802], 99.95th=[33817], 00:32:56.515 | 99.99th=[41681] 00:32:56.515 bw ( KiB/s): min=16384, max=20480, per=25.30%, avg=18432.00, stdev=2896.31, samples=2 00:32:56.515 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:32:56.515 lat (msec) : 4=0.14%, 10=12.02%, 20=70.41%, 50=17.43% 00:32:56.515 cpu : usr=3.76%, sys=6.04%, ctx=297, majf=0, minf=1 00:32:56.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:56.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:56.515 issued rwts: total=4237,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:56.515 job1: 
(groupid=0, jobs=1): err= 0: pid=3240451: Fri Dec 6 15:50:02 2024 00:32:56.515 read: IOPS=5876, BW=23.0MiB/s (24.1MB/s)(23.0MiB/1004msec) 00:32:56.515 slat (nsec): min=1320, max=10485k, avg=79391.39, stdev=489113.80 00:32:56.515 clat (usec): min=454, max=22289, avg=10457.41, stdev=2134.42 00:32:56.515 lat (usec): min=3264, max=22378, avg=10536.80, stdev=2161.28 00:32:56.515 clat percentiles (usec): 00:32:56.515 | 1.00th=[ 4490], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[ 9110], 00:32:56.515 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10552], 00:32:56.515 | 70.00th=[11076], 80.00th=[11731], 90.00th=[12387], 95.00th=[13698], 00:32:56.515 | 99.00th=[19530], 99.50th=[20055], 99.90th=[22152], 99.95th=[22152], 00:32:56.515 | 99.99th=[22414] 00:32:56.515 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:32:56.515 slat (usec): min=2, max=14872, avg=81.32, stdev=547.76 00:32:56.515 clat (usec): min=700, max=30893, avg=10671.75, stdev=2182.65 00:32:56.515 lat (usec): min=1379, max=30924, avg=10753.08, stdev=2223.92 00:32:56.515 clat percentiles (usec): 00:32:56.515 | 1.00th=[ 6128], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[ 9765], 00:32:56.515 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:32:56.515 | 70.00th=[10552], 80.00th=[11076], 90.00th=[13173], 95.00th=[15926], 00:32:56.516 | 99.00th=[19006], 99.50th=[20317], 99.90th=[22152], 99.95th=[22152], 00:32:56.516 | 99.99th=[30802] 00:32:56.516 bw ( KiB/s): min=24576, max=24576, per=33.73%, avg=24576.00, stdev= 0.00, samples=2 00:32:56.516 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:32:56.516 lat (usec) : 500=0.01%, 750=0.01% 00:32:56.516 lat (msec) : 2=0.15%, 4=0.51%, 10=38.83%, 20=59.78%, 50=0.71% 00:32:56.516 cpu : usr=4.99%, sys=6.58%, ctx=444, majf=0, minf=1 00:32:56.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:32:56.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:56.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:56.516 issued rwts: total=5900,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:56.516 job2: (groupid=0, jobs=1): err= 0: pid=3240468: Fri Dec 6 15:50:02 2024 00:32:56.516 read: IOPS=2437, BW=9751KiB/s (9985kB/s)(9868KiB/1012msec) 00:32:56.516 slat (nsec): min=1327, max=24381k, avg=256110.38, stdev=1744539.37 00:32:56.516 clat (usec): min=4693, max=68795, avg=31282.24, stdev=15171.67 00:32:56.516 lat (usec): min=4700, max=68821, avg=31538.35, stdev=15264.51 00:32:56.516 clat percentiles (usec): 00:32:56.516 | 1.00th=[ 7832], 5.00th=[11469], 10.00th=[13960], 20.00th=[15664], 00:32:56.516 | 30.00th=[21103], 40.00th=[25560], 50.00th=[29230], 60.00th=[32900], 00:32:56.516 | 70.00th=[39060], 80.00th=[45351], 90.00th=[56886], 95.00th=[58459], 00:32:56.516 | 99.00th=[62653], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:32:56.516 | 99.99th=[68682] 00:32:56.516 write: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec); 0 zone resets 00:32:56.516 slat (usec): min=2, max=20191, avg=135.70, stdev=852.86 00:32:56.516 clat (usec): min=6234, max=61407, avg=19961.24, stdev=8757.43 00:32:56.516 lat (usec): min=6245, max=62714, avg=20096.95, stdev=8814.89 00:32:56.516 clat percentiles (usec): 00:32:56.516 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[11076], 20.00th=[11731], 00:32:56.516 | 30.00th=[12518], 40.00th=[17957], 50.00th=[20055], 60.00th=[21103], 00:32:56.516 | 70.00th=[21365], 80.00th=[24511], 90.00th=[30802], 95.00th=[35390], 00:32:56.516 | 99.00th=[52167], 99.50th=[54789], 99.90th=[56361], 99.95th=[61604], 00:32:56.516 | 99.99th=[61604] 00:32:56.516 bw ( KiB/s): min= 8192, max=12288, per=14.06%, avg=10240.00, stdev=2896.31, samples=2 00:32:56.516 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:32:56.516 lat (msec) : 10=2.82%, 20=36.46%, 50=52.02%, 100=8.69% 00:32:56.516 cpu : 
usr=1.78%, sys=3.46%, ctx=238, majf=0, minf=1 00:32:56.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:32:56.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:56.516 issued rwts: total=2467,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:56.516 job3: (groupid=0, jobs=1): err= 0: pid=3240473: Fri Dec 6 15:50:02 2024 00:32:56.516 read: IOPS=4979, BW=19.4MiB/s (20.4MB/s)(19.6MiB/1010msec) 00:32:56.516 slat (nsec): min=1332, max=11043k, avg=86055.75, stdev=741418.47 00:32:56.516 clat (usec): min=2380, max=36475, avg=12810.81, stdev=4403.23 00:32:56.516 lat (usec): min=2405, max=36477, avg=12896.87, stdev=4443.08 00:32:56.516 clat percentiles (usec): 00:32:56.516 | 1.00th=[ 2704], 5.00th=[ 7898], 10.00th=[ 9110], 20.00th=[10159], 00:32:56.516 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[12125], 00:32:56.516 | 70.00th=[13829], 80.00th=[15795], 90.00th=[19006], 95.00th=[20579], 00:32:56.516 | 99.00th=[28443], 99.50th=[28705], 99.90th=[36439], 99.95th=[36439], 00:32:56.516 | 99.99th=[36439] 00:32:56.516 write: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec); 0 zone resets 00:32:56.516 slat (usec): min=2, max=20785, avg=90.25, stdev=683.04 00:32:56.516 clat (usec): min=1549, max=47477, avg=12443.41, stdev=6130.09 00:32:56.516 lat (usec): min=1594, max=47485, avg=12533.66, stdev=6175.93 00:32:56.516 clat percentiles (usec): 00:32:56.516 | 1.00th=[ 3523], 5.00th=[ 6783], 10.00th=[ 7832], 20.00th=[ 9634], 00:32:56.516 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11469], 00:32:56.516 | 70.00th=[11600], 80.00th=[12780], 90.00th=[17171], 95.00th=[23200], 00:32:56.516 | 99.00th=[40109], 99.50th=[43254], 99.90th=[47449], 99.95th=[47449], 00:32:56.516 | 99.99th=[47449] 00:32:56.516 bw ( KiB/s): min=19176, max=21784, per=28.11%, 
avg=20480.00, stdev=1844.13, samples=2 00:32:56.516 iops : min= 4794, max= 5446, avg=5120.00, stdev=461.03, samples=2 00:32:56.516 lat (msec) : 2=0.10%, 4=1.09%, 10=19.37%, 20=72.06%, 50=7.38% 00:32:56.516 cpu : usr=3.87%, sys=6.05%, ctx=418, majf=0, minf=1 00:32:56.516 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:56.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:56.516 issued rwts: total=5029,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:56.516 00:32:56.516 Run status group 0 (all jobs): 00:32:56.516 READ: bw=68.1MiB/s (71.4MB/s), 9751KiB/s-23.0MiB/s (9985kB/s-24.1MB/s), io=68.9MiB (72.2MB), run=1004-1012msec 00:32:56.516 WRITE: bw=71.1MiB/s (74.6MB/s), 9.88MiB/s-23.9MiB/s (10.4MB/s-25.1MB/s), io=72.0MiB (75.5MB), run=1004-1012msec 00:32:56.516 00:32:56.516 Disk stats (read/write): 00:32:56.516 nvme0n1: ios=3625/3895, merge=0/0, ticks=36855/41172, in_queue=78027, util=91.98% 00:32:56.516 nvme0n2: ios=5094/5120, merge=0/0, ticks=26102/25747, in_queue=51849, util=98.68% 00:32:56.516 nvme0n3: ios=2068/2359, merge=0/0, ticks=29446/22862, in_queue=52308, util=99.79% 00:32:56.516 nvme0n4: ios=4096/4271, merge=0/0, ticks=47791/49780, in_queue=97571, util=88.03% 00:32:56.516 15:50:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:56.516 [global] 00:32:56.516 thread=1 00:32:56.516 invalidate=1 00:32:56.516 rw=randwrite 00:32:56.516 time_based=1 00:32:56.516 runtime=1 00:32:56.516 ioengine=libaio 00:32:56.516 direct=1 00:32:56.516 bs=4096 00:32:56.516 iodepth=128 00:32:56.516 norandommap=0 00:32:56.516 numjobs=1 00:32:56.516 00:32:56.516 verify_dump=1 00:32:56.516 verify_backlog=512 00:32:56.516 
verify_state_save=0 00:32:56.516 do_verify=1 00:32:56.516 verify=crc32c-intel 00:32:56.516 [job0] 00:32:56.516 filename=/dev/nvme0n1 00:32:56.516 [job1] 00:32:56.516 filename=/dev/nvme0n2 00:32:56.516 [job2] 00:32:56.516 filename=/dev/nvme0n3 00:32:56.516 [job3] 00:32:56.516 filename=/dev/nvme0n4 00:32:56.516 Could not set queue depth (nvme0n1) 00:32:56.516 Could not set queue depth (nvme0n2) 00:32:56.516 Could not set queue depth (nvme0n3) 00:32:56.516 Could not set queue depth (nvme0n4) 00:32:56.774 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.774 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.774 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.774 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.774 fio-3.35 00:32:56.774 Starting 4 threads 00:32:58.147 00:32:58.147 job0: (groupid=0, jobs=1): err= 0: pid=3240859: Fri Dec 6 15:50:03 2024 00:32:58.147 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:32:58.147 slat (nsec): min=1326, max=11786k, avg=87951.18, stdev=759467.89 00:32:58.147 clat (usec): min=1113, max=26431, avg=12291.06, stdev=3196.82 00:32:58.147 lat (usec): min=1120, max=34172, avg=12379.02, stdev=3274.11 00:32:58.147 clat percentiles (usec): 00:32:58.147 | 1.00th=[ 3851], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9503], 00:32:58.147 | 30.00th=[10290], 40.00th=[11207], 50.00th=[12387], 60.00th=[13173], 00:32:58.147 | 70.00th=[13698], 80.00th=[14353], 90.00th=[15795], 95.00th=[17957], 00:32:58.147 | 99.00th=[22676], 99.50th=[24249], 99.90th=[25297], 99.95th=[25297], 00:32:58.147 | 99.99th=[26346] 00:32:58.147 write: IOPS=5392, BW=21.1MiB/s (22.1MB/s)(21.2MiB/1007msec); 0 zone resets 00:32:58.147 slat (usec): min=2, max=16173, avg=87.86, stdev=739.05 
00:32:58.147 clat (usec): min=654, max=42255, avg=11920.11, stdev=5941.44 00:32:58.147 lat (usec): min=665, max=42266, avg=12007.98, stdev=5991.00 00:32:58.147 clat percentiles (usec): 00:32:58.147 | 1.00th=[ 3752], 5.00th=[ 5735], 10.00th=[ 6652], 20.00th=[ 8979], 00:32:58.147 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[11338], 00:32:58.147 | 70.00th=[11994], 80.00th=[12911], 90.00th=[18220], 95.00th=[25822], 00:32:58.147 | 99.00th=[35390], 99.50th=[38011], 99.90th=[42206], 99.95th=[42206], 00:32:58.147 | 99.99th=[42206] 00:32:58.147 bw ( KiB/s): min=17856, max=24568, per=30.55%, avg=21212.00, stdev=4746.10, samples=2 00:32:58.147 iops : min= 4464, max= 6142, avg=5303.00, stdev=1186.53, samples=2 00:32:58.147 lat (usec) : 750=0.03%, 1000=0.01% 00:32:58.147 lat (msec) : 2=0.46%, 4=0.84%, 10=31.78%, 20=62.01%, 50=4.86% 00:32:58.147 cpu : usr=5.37%, sys=6.06%, ctx=243, majf=0, minf=1 00:32:58.147 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:58.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:58.147 issued rwts: total=5120,5430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:58.147 job1: (groupid=0, jobs=1): err= 0: pid=3240872: Fri Dec 6 15:50:03 2024 00:32:58.147 read: IOPS=3239, BW=12.7MiB/s (13.3MB/s)(13.2MiB/1044msec) 00:32:58.147 slat (nsec): min=1070, max=25005k, avg=156385.88, stdev=1193750.95 00:32:58.147 clat (usec): min=2754, max=82050, avg=21644.70, stdev=17240.03 00:32:58.147 lat (usec): min=2759, max=82075, avg=21801.09, stdev=17368.63 00:32:58.147 clat percentiles (usec): 00:32:58.147 | 1.00th=[ 2900], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[10421], 00:32:58.147 | 30.00th=[10814], 40.00th=[12518], 50.00th=[13960], 60.00th=[15533], 00:32:58.147 | 70.00th=[17171], 80.00th=[38011], 90.00th=[51119], 95.00th=[59507], 
00:32:58.147 | 99.00th=[68682], 99.50th=[69731], 99.90th=[72877], 99.95th=[77071], 00:32:58.147 | 99.99th=[82314] 00:32:58.147 write: IOPS=3432, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1044msec); 0 zone resets 00:32:58.147 slat (nsec): min=1806, max=21395k, avg=119973.49, stdev=1024065.06 00:32:58.147 clat (usec): min=1910, max=64030, avg=16485.93, stdev=12806.72 00:32:58.147 lat (usec): min=1929, max=64057, avg=16605.90, stdev=12914.41 00:32:58.147 clat percentiles (usec): 00:32:58.147 | 1.00th=[ 4228], 5.00th=[ 7308], 10.00th=[ 8455], 20.00th=[ 9765], 00:32:58.147 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:32:58.147 | 70.00th=[13173], 80.00th=[22938], 90.00th=[43779], 95.00th=[45876], 00:32:58.147 | 99.00th=[56886], 99.50th=[56886], 99.90th=[59507], 99.95th=[63701], 00:32:58.147 | 99.99th=[64226] 00:32:58.147 bw ( KiB/s): min=12264, max=16408, per=20.65%, avg=14336.00, stdev=2930.25, samples=2 00:32:58.147 iops : min= 3066, max= 4102, avg=3584.00, stdev=732.56, samples=2 00:32:58.147 lat (msec) : 2=0.06%, 4=1.69%, 10=14.87%, 20=60.02%, 50=16.31% 00:32:58.147 lat (msec) : 100=7.05% 00:32:58.147 cpu : usr=2.01%, sys=4.03%, ctx=260, majf=0, minf=1 00:32:58.147 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:32:58.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:58.147 issued rwts: total=3382,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:58.147 job2: (groupid=0, jobs=1): err= 0: pid=3240886: Fri Dec 6 15:50:03 2024 00:32:58.147 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:32:58.147 slat (nsec): min=1371, max=15876k, avg=101614.80, stdev=643034.66 00:32:58.147 clat (usec): min=4636, max=56140, avg=13448.89, stdev=6720.09 00:32:58.147 lat (usec): min=4653, max=56150, avg=13550.50, stdev=6756.05 00:32:58.147 clat 
percentiles (usec): 00:32:58.147 | 1.00th=[ 8094], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10552], 00:32:58.147 | 30.00th=[11207], 40.00th=[11731], 50.00th=[11863], 60.00th=[12256], 00:32:58.147 | 70.00th=[12780], 80.00th=[13173], 90.00th=[15270], 95.00th=[23725], 00:32:58.147 | 99.00th=[51643], 99.50th=[53740], 99.90th=[54264], 99.95th=[54264], 00:32:58.147 | 99.99th=[56361] 00:32:58.147 write: IOPS=4995, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1003msec); 0 zone resets 00:32:58.147 slat (nsec): min=1927, max=21102k, avg=100019.44, stdev=622441.25 00:32:58.147 clat (usec): min=2560, max=31867, avg=12693.70, stdev=4219.65 00:32:58.147 lat (usec): min=3197, max=32541, avg=12793.72, stdev=4255.75 00:32:58.147 clat percentiles (usec): 00:32:58.147 | 1.00th=[ 5407], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[11207], 00:32:58.147 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11994], 00:32:58.147 | 70.00th=[12387], 80.00th=[13304], 90.00th=[15139], 95.00th=[23200], 00:32:58.147 | 99.00th=[30278], 99.50th=[30802], 99.90th=[31327], 99.95th=[31327], 00:32:58.147 | 99.99th=[31851] 00:32:58.147 bw ( KiB/s): min=17616, max=21448, per=28.13%, avg=19532.00, stdev=2709.63, samples=2 00:32:58.147 iops : min= 4404, max= 5362, avg=4883.00, stdev=677.41, samples=2 00:32:58.147 lat (msec) : 4=0.18%, 10=9.91%, 20=83.25%, 50=6.11%, 100=0.55% 00:32:58.147 cpu : usr=3.89%, sys=6.19%, ctx=463, majf=0, minf=1 00:32:58.147 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:58.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:58.147 issued rwts: total=4608,5010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:58.147 job3: (groupid=0, jobs=1): err= 0: pid=3240891: Fri Dec 6 15:50:03 2024 00:32:58.147 read: IOPS=3907, BW=15.3MiB/s (16.0MB/s)(15.9MiB/1042msec) 00:32:58.147 slat 
(nsec): min=1187, max=23647k, avg=118342.63, stdev=1015905.43 00:32:58.147 clat (usec): min=4304, max=68435, avg=17677.72, stdev=11733.24 00:32:58.147 lat (usec): min=4309, max=68441, avg=17796.06, stdev=11797.96 00:32:58.147 clat percentiles (usec): 00:32:58.147 | 1.00th=[ 5342], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10290], 00:32:58.147 | 30.00th=[10945], 40.00th=[12125], 50.00th=[13173], 60.00th=[14484], 00:32:58.147 | 70.00th=[15795], 80.00th=[22152], 90.00th=[40109], 95.00th=[46400], 00:32:58.147 | 99.00th=[56361], 99.50th=[56361], 99.90th=[68682], 99.95th=[68682], 00:32:58.147 | 99.99th=[68682] 00:32:58.147 write: IOPS=3930, BW=15.4MiB/s (16.1MB/s)(16.0MiB/1042msec); 0 zone resets 00:32:58.147 slat (usec): min=2, max=17587, avg=107.89, stdev=860.36 00:32:58.147 clat (usec): min=539, max=45548, avg=14695.91, stdev=7418.59 00:32:58.147 lat (usec): min=867, max=45575, avg=14803.80, stdev=7494.46 00:32:58.147 clat percentiles (usec): 00:32:58.147 | 1.00th=[ 3818], 5.00th=[ 7046], 10.00th=[ 9241], 20.00th=[10683], 00:32:58.147 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[12649], 00:32:58.147 | 70.00th=[14615], 80.00th=[17695], 90.00th=[27919], 95.00th=[31327], 00:32:58.147 | 99.00th=[40109], 99.50th=[40633], 99.90th=[40633], 99.95th=[41681], 00:32:58.147 | 99.99th=[45351] 00:32:58.148 bw ( KiB/s): min=16384, max=16384, per=23.60%, avg=16384.00, stdev= 0.00, samples=2 00:32:58.148 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:32:58.148 lat (usec) : 750=0.01%, 1000=0.05% 00:32:58.148 lat (msec) : 2=0.05%, 4=0.56%, 10=14.74%, 20=64.63%, 50=18.67% 00:32:58.148 lat (msec) : 100=1.29% 00:32:58.148 cpu : usr=3.75%, sys=4.42%, ctx=294, majf=0, minf=1 00:32:58.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:58.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:58.148 issued rwts: 
total=4072,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:58.148 00:32:58.148 Run status group 0 (all jobs): 00:32:58.148 READ: bw=64.3MiB/s (67.4MB/s), 12.7MiB/s-19.9MiB/s (13.3MB/s-20.8MB/s), io=67.1MiB (70.4MB), run=1003-1044msec 00:32:58.148 WRITE: bw=67.8MiB/s (71.1MB/s), 13.4MiB/s-21.1MiB/s (14.1MB/s-22.1MB/s), io=70.8MiB (74.2MB), run=1003-1044msec 00:32:58.148 00:32:58.148 Disk stats (read/write): 00:32:58.148 nvme0n1: ios=4625/4669, merge=0/0, ticks=54477/45716, in_queue=100193, util=89.18% 00:32:58.148 nvme0n2: ios=3122/3095, merge=0/0, ticks=23973/21340, in_queue=45313, util=94.11% 00:32:58.148 nvme0n3: ios=3962/4096, merge=0/0, ticks=21010/19185, in_queue=40195, util=97.61% 00:32:58.148 nvme0n4: ios=3117/3487, merge=0/0, ticks=36417/34821, in_queue=71238, util=94.97% 00:32:58.148 15:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:58.148 15:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3241029 00:32:58.148 15:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:58.148 15:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:58.148 [global] 00:32:58.148 thread=1 00:32:58.148 invalidate=1 00:32:58.148 rw=read 00:32:58.148 time_based=1 00:32:58.148 runtime=10 00:32:58.148 ioengine=libaio 00:32:58.148 direct=1 00:32:58.148 bs=4096 00:32:58.148 iodepth=1 00:32:58.148 norandommap=1 00:32:58.148 numjobs=1 00:32:58.148 00:32:58.148 [job0] 00:32:58.148 filename=/dev/nvme0n1 00:32:58.148 [job1] 00:32:58.148 filename=/dev/nvme0n2 00:32:58.148 [job2] 00:32:58.148 filename=/dev/nvme0n3 00:32:58.148 [job3] 00:32:58.148 filename=/dev/nvme0n4 00:32:58.148 Could not set queue depth (nvme0n1) 00:32:58.148 Could 
not set queue depth (nvme0n2) 00:32:58.148 Could not set queue depth (nvme0n3) 00:32:58.148 Could not set queue depth (nvme0n4) 00:32:58.405 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.405 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.405 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.405 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.405 fio-3.35 00:32:58.405 Starting 4 threads 00:33:00.933 15:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:01.192 15:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:01.192 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2117632, buflen=4096 00:33:01.192 fio: pid=3241366, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:01.451 15:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:01.451 15:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:01.451 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=315392, buflen=4096 00:33:01.451 fio: pid=3241356, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:01.711 15:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:33:01.711 15:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:01.711 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2412544, buflen=4096 00:33:01.711 fio: pid=3241307, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:01.711 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=63385600, buflen=4096 00:33:01.711 fio: pid=3241328, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:01.711 15:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:01.711 15:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:01.971 00:33:01.971 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3241307: Fri Dec 6 15:50:07 2024 00:33:01.971 read: IOPS=186, BW=745KiB/s (763kB/s)(2356KiB/3161msec) 00:33:01.971 slat (usec): min=6, max=26837, avg=54.73, stdev=1104.52 00:33:01.971 clat (usec): min=183, max=43683, avg=5273.90, stdev=13447.83 00:33:01.971 lat (usec): min=190, max=43706, avg=5328.68, stdev=13480.84 00:33:01.971 clat percentiles (usec): 00:33:01.971 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 200], 00:33:01.971 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 221], 00:33:01.971 | 70.00th=[ 225], 80.00th=[ 243], 90.00th=[41157], 95.00th=[41157], 00:33:01.971 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:33:01.971 | 99.99th=[43779] 00:33:01.971 bw ( KiB/s): min= 96, max= 3761, per=3.57%, avg=713.50, stdev=1492.97, samples=6 00:33:01.971 iops : min= 24, max= 940, avg=178.33, 
stdev=373.14, samples=6 00:33:01.971 lat (usec) : 250=82.88%, 500=4.41% 00:33:01.971 lat (msec) : 4=0.17%, 50=12.37% 00:33:01.971 cpu : usr=0.03%, sys=0.22%, ctx=592, majf=0, minf=1 00:33:01.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.971 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.971 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.971 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3241328: Fri Dec 6 15:50:07 2024 00:33:01.971 read: IOPS=4633, BW=18.1MiB/s (19.0MB/s)(60.4MiB/3340msec) 00:33:01.971 slat (usec): min=6, max=31574, avg=13.08, stdev=328.71 00:33:01.971 clat (usec): min=161, max=1654, avg=199.02, stdev=29.98 00:33:01.971 lat (usec): min=181, max=31930, avg=212.10, stdev=333.09 00:33:01.971 clat percentiles (usec): 00:33:01.971 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 186], 00:33:01.971 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 194], 00:33:01.971 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 245], 95.00th=[ 249], 00:33:01.971 | 99.00th=[ 258], 99.50th=[ 260], 99.90th=[ 363], 99.95th=[ 445], 00:33:01.971 | 99.99th=[ 1614] 00:33:01.971 bw ( KiB/s): min=14661, max=20368, per=95.45%, avg=19042.17, stdev=2163.12, samples=6 00:33:01.971 iops : min= 3665, max= 5092, avg=4760.50, stdev=540.88, samples=6 00:33:01.971 lat (usec) : 250=95.89%, 500=4.08%, 1000=0.01% 00:33:01.971 lat (msec) : 2=0.02% 00:33:01.971 cpu : usr=2.58%, sys=7.28%, ctx=15483, majf=0, minf=1 00:33:01.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.971 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:01.971 issued rwts: total=15476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.971 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3241356: Fri Dec 6 15:50:07 2024 00:33:01.971 read: IOPS=26, BW=105KiB/s (107kB/s)(308KiB/2935msec) 00:33:01.971 slat (nsec): min=10035, max=37050, avg=23267.68, stdev=4058.93 00:33:01.971 clat (usec): min=339, max=42014, avg=37808.22, stdev=10957.98 00:33:01.971 lat (usec): min=362, max=42039, avg=37831.46, stdev=10957.35 00:33:01.971 clat percentiles (usec): 00:33:01.971 | 1.00th=[ 338], 5.00th=[ 359], 10.00th=[40633], 20.00th=[41157], 00:33:01.971 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:01.971 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:01.971 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:01.971 | 99.99th=[42206] 00:33:01.971 bw ( KiB/s): min= 96, max= 120, per=0.54%, avg=107.20, stdev= 9.12, samples=5 00:33:01.971 iops : min= 24, max= 30, avg=26.80, stdev= 2.28, samples=5 00:33:01.971 lat (usec) : 500=7.69% 00:33:01.971 lat (msec) : 50=91.03% 00:33:01.972 cpu : usr=0.14%, sys=0.00%, ctx=78, majf=0, minf=2 00:33:01.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.972 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.972 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.972 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3241366: Fri Dec 6 15:50:07 2024 00:33:01.972 read: IOPS=192, BW=767KiB/s (785kB/s)(2068KiB/2696msec) 00:33:01.972 slat (nsec): min=7005, max=38059, avg=10306.24, stdev=4983.53 00:33:01.972 clat 
(usec): min=188, max=45972, avg=5160.20, stdev=13265.10 00:33:01.972 lat (usec): min=196, max=45996, avg=5170.49, stdev=13269.31 00:33:01.972 clat percentiles (usec): 00:33:01.972 | 1.00th=[ 190], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 241], 00:33:01.972 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:33:01.972 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[41157], 95.00th=[41157], 00:33:01.972 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45876], 99.95th=[45876], 00:33:01.972 | 99.99th=[45876] 00:33:01.972 bw ( KiB/s): min= 96, max= 1288, per=1.69%, avg=337.60, stdev=531.33, samples=5 00:33:01.972 iops : min= 24, max= 322, avg=84.40, stdev=132.83, samples=5 00:33:01.972 lat (usec) : 250=59.07%, 500=27.22%, 750=0.19% 00:33:01.972 lat (msec) : 2=1.16%, 4=0.19%, 50=11.97% 00:33:01.972 cpu : usr=0.00%, sys=0.48%, ctx=518, majf=0, minf=1 00:33:01.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.972 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.972 issued rwts: total=518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:01.972 00:33:01.972 Run status group 0 (all jobs): 00:33:01.972 READ: bw=19.5MiB/s (20.4MB/s), 105KiB/s-18.1MiB/s (107kB/s-19.0MB/s), io=65.1MiB (68.2MB), run=2696-3340msec 00:33:01.972 00:33:01.972 Disk stats (read/write): 00:33:01.972 nvme0n1: ios=603/0, merge=0/0, ticks=3078/0, in_queue=3078, util=94.51% 00:33:01.972 nvme0n2: ios=15484/0, merge=0/0, ticks=3897/0, in_queue=3897, util=97.12% 00:33:01.972 nvme0n3: ios=74/0, merge=0/0, ticks=2791/0, in_queue=2791, util=96.20% 00:33:01.972 nvme0n4: ios=418/0, merge=0/0, ticks=2553/0, in_queue=2553, util=96.38% 00:33:01.972 15:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:33:01.972 15:50:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:02.231 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:02.231 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:02.490 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:02.490 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:02.749 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:02.749 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:02.749 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:02.749 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3241029 00:33:02.749 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:02.749 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:03.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:03.009 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:03.009 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:33:03.009 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:03.009 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:03.009 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:03.009 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:03.009 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:33:03.009 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:03.009 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:03.009 nvmf hotplug test: fio failed as expected 00:33:03.009 15:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 
00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:03.267 rmmod nvme_tcp 00:33:03.267 rmmod nvme_fabrics 00:33:03.267 rmmod nvme_keyring 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3238489 ']' 00:33:03.267 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3238489 00:33:03.268 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3238489 ']' 00:33:03.268 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3238489 00:33:03.268 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:33:03.268 15:50:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:03.268 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3238489 00:33:03.268 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:03.268 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:03.268 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3238489' 00:33:03.268 killing process with pid 3238489 00:33:03.268 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3238489 00:33:03.268 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3238489 00:33:03.526 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:03.526 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:03.526 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:03.527 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:03.527 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:33:03.527 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:03.527 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:03.527 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:03.527 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:03.527 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.527 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:03.527 15:50:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:06.063 00:33:06.063 real 0m26.448s 00:33:06.063 user 1m31.370s 00:33:06.063 sys 0m11.324s 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:06.063 ************************************ 00:33:06.063 END TEST nvmf_fio_target 00:33:06.063 ************************************ 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:06.063 ************************************ 00:33:06.063 START TEST nvmf_bdevio 00:33:06.063 ************************************ 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:06.063 * Looking for test storage... 00:33:06.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 
00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:06.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.063 --rc genhtml_branch_coverage=1 00:33:06.063 --rc genhtml_function_coverage=1 00:33:06.063 --rc genhtml_legend=1 00:33:06.063 --rc geninfo_all_blocks=1 00:33:06.063 --rc geninfo_unexecuted_blocks=1 00:33:06.063 00:33:06.063 ' 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:06.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.063 --rc genhtml_branch_coverage=1 00:33:06.063 --rc genhtml_function_coverage=1 00:33:06.063 --rc genhtml_legend=1 00:33:06.063 --rc geninfo_all_blocks=1 00:33:06.063 --rc geninfo_unexecuted_blocks=1 00:33:06.063 00:33:06.063 ' 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:06.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.063 --rc genhtml_branch_coverage=1 00:33:06.063 --rc genhtml_function_coverage=1 00:33:06.063 --rc genhtml_legend=1 00:33:06.063 --rc geninfo_all_blocks=1 00:33:06.063 --rc geninfo_unexecuted_blocks=1 00:33:06.063 00:33:06.063 ' 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:06.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:06.063 --rc genhtml_branch_coverage=1 00:33:06.063 --rc genhtml_function_coverage=1 00:33:06.063 --rc genhtml_legend=1 
00:33:06.063 --rc geninfo_all_blocks=1 00:33:06.063 --rc geninfo_unexecuted_blocks=1 00:33:06.063 00:33:06.063 ' 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:06.063 15:50:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:06.063 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:06.064 15:50:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:06.064 15:50:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:12.634 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:12.635 15:50:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:12.635 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:12.635 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:12.635 15:50:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:12.635 Found net devices under 0000:86:00.0: cvl_0_0 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:12.635 Found net devices under 0000:86:00.1: cvl_0_1 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:12.635 15:50:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:12.635 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:12.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:12.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:33:12.636 00:33:12.636 --- 10.0.0.2 ping statistics --- 00:33:12.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.636 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:12.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:12.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:33:12.636 00:33:12.636 --- 10.0.0.1 ping statistics --- 00:33:12.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.636 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=3245626 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3245626 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3245626 ']' 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.636 [2024-12-06 15:50:17.725285] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:12.636 [2024-12-06 15:50:17.726187] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:33:12.636 [2024-12-06 15:50:17.726220] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.636 [2024-12-06 15:50:17.801620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:12.636 [2024-12-06 15:50:17.842848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.636 [2024-12-06 15:50:17.842884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.636 [2024-12-06 15:50:17.842891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.636 [2024-12-06 15:50:17.842897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.636 [2024-12-06 15:50:17.842903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:12.636 [2024-12-06 15:50:17.844388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:12.636 [2024-12-06 15:50:17.844462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:12.636 [2024-12-06 15:50:17.844572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:12.636 [2024-12-06 15:50:17.844571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:12.636 [2024-12-06 15:50:17.911503] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:12.636 [2024-12-06 15:50:17.912308] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:12.636 [2024-12-06 15:50:17.912443] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:12.636 [2024-12-06 15:50:17.912628] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:12.636 [2024-12-06 15:50:17.912681] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.636 15:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.636 [2024-12-06 15:50:17.985411] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.636 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.636 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:12.636 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.636 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.636 Malloc0 00:33:12.636 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.636 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:12.636 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.636 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.636 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.637 [2024-12-06 15:50:18.065676] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:12.637 { 00:33:12.637 "params": { 00:33:12.637 "name": "Nvme$subsystem", 00:33:12.637 "trtype": "$TEST_TRANSPORT", 00:33:12.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:12.637 "adrfam": "ipv4", 00:33:12.637 "trsvcid": "$NVMF_PORT", 00:33:12.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:12.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:12.637 "hdgst": ${hdgst:-false}, 00:33:12.637 "ddgst": ${ddgst:-false} 00:33:12.637 }, 00:33:12.637 "method": "bdev_nvme_attach_controller" 00:33:12.637 } 00:33:12.637 EOF 00:33:12.637 )") 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:33:12.637 15:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:12.637 "params": { 00:33:12.637 "name": "Nvme1", 00:33:12.637 "trtype": "tcp", 00:33:12.637 "traddr": "10.0.0.2", 00:33:12.637 "adrfam": "ipv4", 00:33:12.637 "trsvcid": "4420", 00:33:12.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:12.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:12.637 "hdgst": false, 00:33:12.637 "ddgst": false 00:33:12.637 }, 00:33:12.637 "method": "bdev_nvme_attach_controller" 00:33:12.637 }' 00:33:12.637 [2024-12-06 15:50:18.118323] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:33:12.637 [2024-12-06 15:50:18.118366] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3245653 ] 00:33:12.637 [2024-12-06 15:50:18.193983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:12.637 [2024-12-06 15:50:18.237341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.637 [2024-12-06 15:50:18.237450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.637 [2024-12-06 15:50:18.237451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:12.637 I/O targets: 00:33:12.637 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:12.637 00:33:12.637 00:33:12.637 CUnit - A unit testing framework for C - Version 2.1-3 00:33:12.637 http://cunit.sourceforge.net/ 00:33:12.637 00:33:12.637 00:33:12.637 Suite: bdevio tests on: Nvme1n1 00:33:12.637 Test: blockdev write read block ...passed 00:33:12.895 Test: blockdev write zeroes read block ...passed 00:33:12.895 Test: blockdev write zeroes read no split ...passed 00:33:12.895 Test: blockdev 
write zeroes read split ...passed 00:33:12.895 Test: blockdev write zeroes read split partial ...passed 00:33:12.895 Test: blockdev reset ...[2024-12-06 15:50:18.700895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:12.895 [2024-12-06 15:50:18.700958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x904f30 (9): Bad file descriptor 00:33:12.895 [2024-12-06 15:50:18.712146] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:33:12.895 passed 00:33:12.895 Test: blockdev write read 8 blocks ...passed 00:33:12.895 Test: blockdev write read size > 128k ...passed 00:33:12.895 Test: blockdev write read invalid size ...passed 00:33:12.895 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:12.895 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:12.895 Test: blockdev write read max offset ...passed 00:33:13.153 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:13.153 Test: blockdev writev readv 8 blocks ...passed 00:33:13.153 Test: blockdev writev readv 30 x 1block ...passed 00:33:13.153 Test: blockdev writev readv block ...passed 00:33:13.153 Test: blockdev writev readv size > 128k ...passed 00:33:13.153 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:13.153 Test: blockdev comparev and writev ...[2024-12-06 15:50:19.004353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.153 [2024-12-06 15:50:19.004387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:13.153 [2024-12-06 15:50:19.004401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.153 
[2024-12-06 15:50:19.004413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:13.153 [2024-12-06 15:50:19.004695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.153 [2024-12-06 15:50:19.004705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:13.153 [2024-12-06 15:50:19.004716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.153 [2024-12-06 15:50:19.004723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:13.153 [2024-12-06 15:50:19.005006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.153 [2024-12-06 15:50:19.005016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:13.153 [2024-12-06 15:50:19.005027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.153 [2024-12-06 15:50:19.005035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:13.153 [2024-12-06 15:50:19.005319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.153 [2024-12-06 15:50:19.005330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:13.153 [2024-12-06 15:50:19.005343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:13.153 [2024-12-06 15:50:19.005351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:13.153 passed 00:33:13.153 Test: blockdev nvme passthru rw ...passed 00:33:13.153 Test: blockdev nvme passthru vendor specific ...[2024-12-06 15:50:19.087719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:13.153 [2024-12-06 15:50:19.087734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:13.153 [2024-12-06 15:50:19.087850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:13.153 [2024-12-06 15:50:19.087859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:13.153 [2024-12-06 15:50:19.087964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:13.153 [2024-12-06 15:50:19.087973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:13.154 [2024-12-06 15:50:19.088087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:13.154 [2024-12-06 15:50:19.088095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:13.154 passed 00:33:13.154 Test: blockdev nvme admin passthru ...passed 00:33:13.154 Test: blockdev copy ...passed 00:33:13.154 00:33:13.154 Run Summary: Type Total Ran Passed Failed Inactive 00:33:13.154 suites 1 1 n/a 0 0 00:33:13.154 tests 23 23 23 0 0 00:33:13.154 asserts 152 152 152 0 n/a 00:33:13.154 00:33:13.154 Elapsed time = 1.187 
seconds 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:13.412 rmmod nvme_tcp 00:33:13.412 rmmod nvme_fabrics 00:33:13.412 rmmod nvme_keyring 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 3245626 ']' 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3245626 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3245626 ']' 00:33:13.412 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3245626 00:33:13.413 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:33:13.413 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:13.413 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3245626 00:33:13.413 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:33:13.413 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:33:13.413 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3245626' 00:33:13.413 killing process with pid 3245626 00:33:13.413 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3245626 00:33:13.413 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3245626 00:33:13.672 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:13.672 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:13.672 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:13.672 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:33:13.672 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:33:13.672 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:33:13.672 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:13.672 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:13.672 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:13.672 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.672 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:13.672 15:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.212 15:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:16.212 00:33:16.212 real 0m10.113s 00:33:16.212 user 0m9.710s 00:33:16.212 sys 0m5.244s 00:33:16.212 15:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:16.212 15:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:16.212 ************************************ 00:33:16.212 END TEST nvmf_bdevio 00:33:16.212 ************************************ 00:33:16.212 15:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:16.212 00:33:16.212 real 4m33.859s 00:33:16.212 user 9m12.761s 00:33:16.212 sys 1m53.364s 00:33:16.212 15:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:33:16.212 15:50:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:16.212 ************************************ 00:33:16.212 END TEST nvmf_target_core_interrupt_mode 00:33:16.212 ************************************ 00:33:16.212 15:50:21 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:16.212 15:50:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:16.212 15:50:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:16.212 15:50:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:16.212 ************************************ 00:33:16.212 START TEST nvmf_interrupt 00:33:16.212 ************************************ 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:16.212 * Looking for test storage... 
00:33:16.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:16.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.212 --rc genhtml_branch_coverage=1 00:33:16.212 --rc genhtml_function_coverage=1 00:33:16.212 --rc genhtml_legend=1 00:33:16.212 --rc geninfo_all_blocks=1 00:33:16.212 --rc geninfo_unexecuted_blocks=1 00:33:16.212 00:33:16.212 ' 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:16.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.212 --rc genhtml_branch_coverage=1 00:33:16.212 --rc 
genhtml_function_coverage=1 00:33:16.212 --rc genhtml_legend=1 00:33:16.212 --rc geninfo_all_blocks=1 00:33:16.212 --rc geninfo_unexecuted_blocks=1 00:33:16.212 00:33:16.212 ' 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:16.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.212 --rc genhtml_branch_coverage=1 00:33:16.212 --rc genhtml_function_coverage=1 00:33:16.212 --rc genhtml_legend=1 00:33:16.212 --rc geninfo_all_blocks=1 00:33:16.212 --rc geninfo_unexecuted_blocks=1 00:33:16.212 00:33:16.212 ' 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:16.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.212 --rc genhtml_branch_coverage=1 00:33:16.212 --rc genhtml_function_coverage=1 00:33:16.212 --rc genhtml_legend=1 00:33:16.212 --rc geninfo_all_blocks=1 00:33:16.212 --rc geninfo_unexecuted_blocks=1 00:33:16.212 00:33:16.212 ' 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:16.212 
15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:16.212 15:50:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.213 
15:50:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:16.213 15:50:21 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:16.213 
15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:16.213 15:50:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:21.590 15:50:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:21.590 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:21.590 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:21.590 15:50:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:21.590 Found net devices under 0000:86:00.0: cvl_0_0 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:21.590 Found net devices under 0000:86:00.1: cvl_0_1 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:21.590 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:21.591 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:21.885 15:50:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:21.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:21.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:33:21.885 00:33:21.885 --- 10.0.0.2 ping statistics --- 00:33:21.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.885 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:21.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:21.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:33:21.885 00:33:21.885 --- 10.0.0.1 ping statistics --- 00:33:21.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.885 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:21.885 15:50:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3249423 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3249423 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3249423 ']' 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:21.885 15:50:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.145 [2024-12-06 15:50:27.915548] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:22.145 [2024-12-06 15:50:27.916518] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:33:22.145 [2024-12-06 15:50:27.916556] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:22.145 [2024-12-06 15:50:27.992644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:33:22.145 [2024-12-06 15:50:28.031473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:22.145 [2024-12-06 15:50:28.031510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:22.145 [2024-12-06 15:50:28.031517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:22.145 [2024-12-06 15:50:28.031523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:22.145 [2024-12-06 15:50:28.031529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:22.145 [2024-12-06 15:50:28.032766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:22.145 [2024-12-06 15:50:28.032766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:22.145 [2024-12-06 15:50:28.101375] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:22.145 [2024-12-06 15:50:28.101934] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:33:22.145 [2024-12-06 15:50:28.102144] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:22.145 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:22.145 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:33:22.145 15:50:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:22.145 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:22.145 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:33:22.405 5000+0 records in
00:33:22.405 5000+0 records out
00:33:22.405 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0171291 s, 598 MB/s
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:33:22.405 AIO0
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:33:22.405 [2024-12-06 15:50:28.241457] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:33:22.405 [2024-12-06 15:50:28.281856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3249423 0
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3249423 0 idle
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3249423
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3249423 -w 256
00:33:22.405 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:33:22.664 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3249423 root 20 0 128.2g 46080 33792 R 0.0 0.0 0:00.25 reactor_0'
00:33:22.664 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3249423 root 20 0 128.2g 46080 33792 R 0.0 0.0 0:00.25 reactor_0
00:33:22.664 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:33:22.664 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:33:22.664 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:33:22.664 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:33:22.664 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3249423 1
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3249423 1 idle
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3249423
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3249423 -w 256
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3249430 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1'
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3249430 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3249473
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:33:22.665 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3249423 0
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3249423 0 busy
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3249423
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3249423 -w 256
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3249423 root 20 0 128.2g 46848 33792 R 73.3 0.0 0:00.36 reactor_0'
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3249423 root 20 0 128.2g 46848 33792 R 73.3 0.0 0:00.36 reactor_0
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=73.3
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=73
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3249423 1
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3249423 1 busy
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3249423
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3249423 -w 256
00:33:22.924 15:50:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:33:23.182 15:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3249430 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.23 reactor_1'
00:33:23.182 15:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3249430 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.23 reactor_1
00:33:23.182 15:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:33:23.182 15:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:33:23.182 15:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:33:23.182 15:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:33:23.182 15:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:33:23.182 15:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:33:23.182 15:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:33:23.182 15:50:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:33:23.182 15:50:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3249473
00:33:33.152 Initializing NVMe Controllers
00:33:33.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:33.152 Controller IO queue size 256, less than required.
00:33:33.152 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:33.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:33:33.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:33:33.152 Initialization complete. Launching workers.
00:33:33.152 ========================================================
00:33:33.153 Latency(us)
00:33:33.153 Device Information : IOPS MiB/s Average min max
00:33:33.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16822.10 65.71 15225.47 2840.04 30188.35
00:33:33.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 17008.70 66.44 15055.16 6942.72 28033.88
00:33:33.153 ========================================================
00:33:33.153 Total : 33830.80 132.15 15139.84 2840.04 30188.35
00:33:33.153
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3249423 0
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3249423 0 idle
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3249423
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3249423 -w 256
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3249423 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:20.24 reactor_0'
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3249423 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:20.24 reactor_0
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:33:33.153 15:50:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3249423 1
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3249423 1 idle
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3249423
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3249423 -w 256
00:33:33.153 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:33:33.413 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3249430 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1'
00:33:33.413 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3249430 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1
00:33:33.413 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:33:33.413 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:33:33.413 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:33:33.413 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:33:33.413 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:33:33.413 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:33:33.413 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:33:33.413 15:50:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:33:33.413 15:50:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:33:33.673 15:50:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME
00:33:33.673 15:50:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0
00:33:33.673 15:50:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:33:33.673 15:50:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:33:33.673 15:50:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3249423 0
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3249423 0 idle
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3249423
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3249423 -w 256
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3249423 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.51 reactor_0'
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3249423 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.51 reactor_0
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3249423 1
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3249423 1 idle
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3249423
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:33:36.207 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3249423 -w 256
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3249430 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.11 reactor_1'
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3249430 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.11 reactor_1
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:33:36.208 15:50:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:33:36.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:36.208 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:36.208 rmmod nvme_tcp
00:33:36.208 rmmod nvme_fabrics
00:33:36.466 rmmod nvme_keyring
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3249423 ']'
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3249423
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3249423 ']'
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3249423
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3249423
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3249423'
00:33:36.466 killing process with pid 3249423
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3249423
00:33:36.466 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3249423
00:33:36.724 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:36.724 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:36.724 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:36.724 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr
00:33:36.724 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save
00:33:36.724 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:36.724 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore
00:33:36.724 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:36.724 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:36.724 15:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:36.725 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:33:36.725 15:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:38.637 15:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:38.637
00:33:38.637 real 0m22.806s
00:33:38.637 user 0m39.495s
00:33:38.637 sys 0m8.561s
00:33:38.637 15:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:38.637 15:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:33:38.637 ************************************
00:33:38.637 END TEST nvmf_interrupt
00:33:38.637 ************************************
00:33:38.637
00:33:38.637 real 27m25.931s
00:33:38.637 user 56m33.893s
00:33:38.637 sys 9m26.660s
00:33:38.637 15:50:44 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:38.637 15:50:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:33:38.637 ************************************
00:33:38.637 END TEST nvmf_tcp
00:33:38.637 ************************************
00:33:38.895 15:50:44 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]]
00:33:38.895 15:50:44 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:33:38.895 15:50:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:33:38.895 15:50:44 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:38.895 15:50:44 -- common/autotest_common.sh@10 -- # set +x
00:33:38.895 ************************************
00:33:38.895 START TEST spdkcli_nvmf_tcp
00:33:38.895 ************************************
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:33:38.895 * Looking for test storage...
00:33:38.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:33:38.895 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:33:38.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:38.896 --rc genhtml_branch_coverage=1
00:33:38.896 --rc genhtml_function_coverage=1
00:33:38.896 --rc genhtml_legend=1
00:33:38.896 --rc geninfo_all_blocks=1
00:33:38.896 --rc geninfo_unexecuted_blocks=1
00:33:38.896
00:33:38.896 '
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:33:38.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:38.896 --rc genhtml_branch_coverage=1
00:33:38.896 --rc genhtml_function_coverage=1
00:33:38.896 --rc genhtml_legend=1
00:33:38.896 --rc geninfo_all_blocks=1
00:33:38.896 --rc geninfo_unexecuted_blocks=1
00:33:38.896
00:33:38.896 '
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:33:38.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:38.896 --rc genhtml_branch_coverage=1
00:33:38.896 --rc genhtml_function_coverage=1
00:33:38.896 --rc genhtml_legend=1
00:33:38.896 --rc geninfo_all_blocks=1
00:33:38.896 --rc geninfo_unexecuted_blocks=1
00:33:38.896
00:33:38.896 '
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:33:38.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:38.896 --rc genhtml_branch_coverage=1
00:33:38.896 --rc genhtml_function_coverage=1
00:33:38.896 --rc genhtml_legend=1
00:33:38.896 --rc geninfo_all_blocks=1
00:33:38.896 --rc geninfo_unexecuted_blocks=1
00:33:38.896
00:33:38.896 '
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:38.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3252239 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3252239 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3252239 ']' 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:38.896 
15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.896 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:38.897 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.897 15:50:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.154 [2024-12-06 15:50:44.934252] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:33:39.154 [2024-12-06 15:50:44.934301] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3252239 ] 00:33:39.154 [2024-12-06 15:50:45.006203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:39.154 [2024-12-06 15:50:45.049349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.154 [2024-12-06 15:50:45.049353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.154 15:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:39.154 15:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:39.154 15:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:39.154 15:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:39.155 15:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.412 15:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:39.412 15:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:39.412 15:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:33:39.412 15:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:39.412 15:50:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.412 15:50:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:39.412 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:39.412 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:39.412 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:39.412 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:39.412 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:39.412 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:39.412 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:39.412 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:39.412 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:39.412 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:39.412 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:39.412 ' 00:33:41.936 [2024-12-06 15:50:47.872000] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:43.433 [2024-12-06 15:50:49.212507] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:45.979 [2024-12-06 15:50:51.700282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:33:48.507 [2024-12-06 15:50:53.879030] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:49.894 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:49.894 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:49.894 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:49.894 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:49.894 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:49.894 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:49.894 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:49.894 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:49.894 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:49.894 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:49.894 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:49.894 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:49.894 15:50:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:49.894 15:50:55 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:33:49.894 15:50:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.894 15:50:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:49.894 15:50:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:49.894 15:50:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.894 15:50:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:49.894 15:50:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:50.151 15:50:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:50.151 15:50:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:50.151 15:50:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:50.151 15:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:50.151 15:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:50.151 15:50:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:50.151 15:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:50.151 15:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:50.409 15:50:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:50.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:50.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:50.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:50.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:50.409 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:50.409 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:50.409 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:50.409 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:50.409 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:50.409 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:50.409 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:50.409 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:50.409 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:50.409 ' 00:33:55.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:55.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:55.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:55.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:55.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:55.674 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:55.674 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:55.674 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:55.674 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:55.674 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:55.674 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:55.674 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:55.674 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:55.674 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:55.933 15:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:55.933 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:55.933 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:55.933 15:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3252239 00:33:55.933 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3252239 ']' 00:33:55.933 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3252239 00:33:55.933 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:55.933 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:55.933 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3252239 00:33:55.933 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:55.933 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:55.933 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3252239' 00:33:55.933 killing process with pid 3252239 00:33:55.933 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3252239 00:33:55.933 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3252239 00:33:56.193 15:51:01 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:56.193 15:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:56.193 15:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3252239 ']' 00:33:56.193 15:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3252239 00:33:56.193 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3252239 ']' 00:33:56.193 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3252239 00:33:56.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3252239) - No such process 00:33:56.193 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3252239 is not found' 00:33:56.193 Process with pid 3252239 is not found 00:33:56.193 15:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:56.193 15:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:56.193 15:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:56.193 00:33:56.193 real 0m17.325s 00:33:56.193 user 0m38.223s 00:33:56.193 sys 0m0.783s 00:33:56.193 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:56.193 15:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:56.193 ************************************ 00:33:56.193 END TEST spdkcli_nvmf_tcp 00:33:56.193 ************************************ 00:33:56.193 15:51:02 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:56.193 15:51:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:56.193 15:51:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:33:56.193 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:33:56.193 ************************************ 00:33:56.193 START TEST nvmf_identify_passthru 00:33:56.193 ************************************ 00:33:56.193 15:51:02 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:56.193 * Looking for test storage... 00:33:56.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:56.193 15:51:02 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:56.193 15:51:02 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:33:56.193 15:51:02 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:56.453 15:51:02 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:56.453 15:51:02 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:56.453 15:51:02 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:56.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.453 --rc genhtml_branch_coverage=1 00:33:56.453 --rc genhtml_function_coverage=1 00:33:56.453 --rc genhtml_legend=1 00:33:56.453 --rc geninfo_all_blocks=1 00:33:56.453 --rc geninfo_unexecuted_blocks=1 00:33:56.453 
00:33:56.453 ' 00:33:56.453 15:51:02 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:56.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.453 --rc genhtml_branch_coverage=1 00:33:56.453 --rc genhtml_function_coverage=1 00:33:56.453 --rc genhtml_legend=1 00:33:56.453 --rc geninfo_all_blocks=1 00:33:56.453 --rc geninfo_unexecuted_blocks=1 00:33:56.453 00:33:56.453 ' 00:33:56.453 15:51:02 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:56.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.453 --rc genhtml_branch_coverage=1 00:33:56.453 --rc genhtml_function_coverage=1 00:33:56.453 --rc genhtml_legend=1 00:33:56.453 --rc geninfo_all_blocks=1 00:33:56.453 --rc geninfo_unexecuted_blocks=1 00:33:56.453 00:33:56.453 ' 00:33:56.453 15:51:02 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:56.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.453 --rc genhtml_branch_coverage=1 00:33:56.453 --rc genhtml_function_coverage=1 00:33:56.453 --rc genhtml_legend=1 00:33:56.453 --rc geninfo_all_blocks=1 00:33:56.453 --rc geninfo_unexecuted_blocks=1 00:33:56.453 00:33:56.453 ' 00:33:56.453 15:51:02 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:56.453 15:51:02 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.453 15:51:02 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.453 15:51:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.453 15:51:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.453 15:51:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.453 15:51:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:56.453 15:51:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:56.453 15:51:02 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:56.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:56.453 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:56.453 15:51:02 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.454 15:51:02 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.454 15:51:02 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.454 15:51:02 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.454 15:51:02 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.454 15:51:02 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.454 15:51:02 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.454 15:51:02 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.454 15:51:02 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:56.454 15:51:02 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.454 15:51:02 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:56.454 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:56.454 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:56.454 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:56.454 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:56.454 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:56.454 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.454 15:51:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:56.454 15:51:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.454 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:56.454 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:56.454 15:51:02 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:56.454 15:51:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:03.026 
15:51:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:03.026 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:03.026 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:03.026 Found net devices under 0000:86:00.0: cvl_0_0 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.026 15:51:07 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:03.026 Found net devices under 0000:86:00.1: cvl_0_1 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:03.026 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:03.027 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:03.027 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:03.027 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:03.027 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:03.027 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:03.027 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:03.027 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:03.027 
15:51:07 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:03.027 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:03.027 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:03.027 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:03.027 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:03.027 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:03.027 15:51:07 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:03.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:03.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:34:03.027 00:34:03.027 --- 10.0.0.2 ping statistics --- 00:34:03.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.027 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:03.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:03.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:34:03.027 00:34:03.027 --- 10.0.0.1 ping statistics --- 00:34:03.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.027 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:03.027 15:51:08 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:03.027 15:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:03.027 15:51:08 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:03.027 15:51:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:03.027 15:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:03.027 
15:51:08 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:03.027 15:51:08 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:03.027 15:51:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:03.027 15:51:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:03.027 15:51:08 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:03.027 15:51:08 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:34:03.027 15:51:08 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:03.027 15:51:08 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:03.027 15:51:08 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:03.027 15:51:08 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:03.027 15:51:08 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:34:03.027 15:51:08 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:34:03.027 15:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:34:03.027 15:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:34:03.027 15:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:03.027 15:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:03.027 15:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:07.211 15:51:13 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:34:07.211 15:51:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:07.211 15:51:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:07.211 15:51:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:12.476 15:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:12.476 15:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:12.476 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:12.476 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.476 15:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:12.476 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:12.476 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.476 15:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3260144 00:34:12.476 15:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:12.476 15:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:12.476 15:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3260144 00:34:12.476 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3260144 ']' 00:34:12.476 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:34:12.476 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:12.476 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:12.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:12.476 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:12.476 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.477 [2024-12-06 15:51:17.813054] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:34:12.477 [2024-12-06 15:51:17.813105] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.477 [2024-12-06 15:51:17.872744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:12.477 [2024-12-06 15:51:17.915839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:12.477 [2024-12-06 15:51:17.915878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:12.477 [2024-12-06 15:51:17.915885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:12.477 [2024-12-06 15:51:17.915891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:12.477 [2024-12-06 15:51:17.915896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
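The target above is launched with `-m 0xF`, and the EAL notices that follow report reactors starting on cores 0 through 3. As an illustrative aside (this helper is hypothetical, not part of SPDK; SPDK/DPDK parse the mask internally), the hex core mask decodes to that core list like so:

```python
def cores_from_mask(mask: str) -> list[int]:
    """Decode a hex CPU core mask (e.g. the -m 0xF seen above) into core IDs.

    Hypothetical helper for illustration only; not SPDK's actual parser.
    """
    value = int(mask, 16)
    cores = []
    bit = 0
    while value:
        if value & 1:          # this core's bit is set in the mask
            cores.append(bit)
        value >>= 1
        bit += 1
    return cores

print(cores_from_mask("0xF"))  # → [0, 1, 2, 3], matching the four reactor notices
print(cores_from_mask("0x5"))  # → [0, 2]
```

This is why the log shows exactly four "Reactor started on core N" notices (in nondeterministic order, since the reactors start concurrently).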
00:34:12.477 [2024-12-06 15:51:17.917359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.477 [2024-12-06 15:51:17.917471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:12.477 [2024-12-06 15:51:17.917503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.477 [2024-12-06 15:51:17.917504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:12.477 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:12.477 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:34:12.477 15:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:12.477 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.477 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.477 INFO: Log level set to 20 00:34:12.477 INFO: Requests: 00:34:12.477 { 00:34:12.477 "jsonrpc": "2.0", 00:34:12.477 "method": "nvmf_set_config", 00:34:12.477 "id": 1, 00:34:12.477 "params": { 00:34:12.477 "admin_cmd_passthru": { 00:34:12.477 "identify_ctrlr": true 00:34:12.477 } 00:34:12.477 } 00:34:12.477 } 00:34:12.477 00:34:12.477 INFO: response: 00:34:12.477 { 00:34:12.477 "jsonrpc": "2.0", 00:34:12.477 "id": 1, 00:34:12.477 "result": true 00:34:12.477 } 00:34:12.477 00:34:12.477 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.477 15:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:12.477 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.477 15:51:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.477 INFO: Setting log level to 20 00:34:12.477 INFO: Setting log level to 20 00:34:12.477 INFO: Log level set to 20 00:34:12.477 INFO: Log level set to 20 00:34:12.477 
INFO: Requests: 00:34:12.477 { 00:34:12.477 "jsonrpc": "2.0", 00:34:12.477 "method": "framework_start_init", 00:34:12.477 "id": 1 00:34:12.477 } 00:34:12.477 00:34:12.477 INFO: Requests: 00:34:12.477 { 00:34:12.477 "jsonrpc": "2.0", 00:34:12.477 "method": "framework_start_init", 00:34:12.477 "id": 1 00:34:12.477 } 00:34:12.477 00:34:12.477 [2024-12-06 15:51:18.038927] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:12.477 INFO: response: 00:34:12.477 { 00:34:12.477 "jsonrpc": "2.0", 00:34:12.477 "id": 1, 00:34:12.477 "result": true 00:34:12.477 } 00:34:12.477 00:34:12.477 INFO: response: 00:34:12.477 { 00:34:12.477 "jsonrpc": "2.0", 00:34:12.477 "id": 1, 00:34:12.477 "result": true 00:34:12.477 } 00:34:12.477 00:34:12.477 15:51:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.477 15:51:18 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:12.477 15:51:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.477 15:51:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.477 INFO: Setting log level to 40 00:34:12.477 INFO: Setting log level to 40 00:34:12.477 INFO: Setting log level to 40 00:34:12.477 [2024-12-06 15:51:18.052230] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.477 15:51:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.477 15:51:18 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:12.477 15:51:18 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:12.477 15:51:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:12.477 15:51:18 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:34:12.477 15:51:18 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.477 15:51:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.013 Nvme0n1 00:34:15.013 15:51:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.013 15:51:20 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:15.013 15:51:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.013 15:51:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.013 15:51:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.013 15:51:20 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:15.013 15:51:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.013 15:51:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.013 15:51:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.013 15:51:20 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:15.013 15:51:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.013 15:51:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.013 [2024-12-06 15:51:20.968712] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:15.013 15:51:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.013 15:51:20 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:15.013 15:51:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.013 15:51:20 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.013 [ 00:34:15.013 { 00:34:15.013 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:15.013 "subtype": "Discovery", 00:34:15.013 "listen_addresses": [], 00:34:15.013 "allow_any_host": true, 00:34:15.013 "hosts": [] 00:34:15.013 }, 00:34:15.013 { 00:34:15.013 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:15.013 "subtype": "NVMe", 00:34:15.013 "listen_addresses": [ 00:34:15.013 { 00:34:15.013 "trtype": "TCP", 00:34:15.013 "adrfam": "IPv4", 00:34:15.013 "traddr": "10.0.0.2", 00:34:15.013 "trsvcid": "4420" 00:34:15.013 } 00:34:15.013 ], 00:34:15.013 "allow_any_host": true, 00:34:15.013 "hosts": [], 00:34:15.013 "serial_number": "SPDK00000000000001", 00:34:15.013 "model_number": "SPDK bdev Controller", 00:34:15.013 "max_namespaces": 1, 00:34:15.013 "min_cntlid": 1, 00:34:15.013 "max_cntlid": 65519, 00:34:15.013 "namespaces": [ 00:34:15.013 { 00:34:15.013 "nsid": 1, 00:34:15.013 "bdev_name": "Nvme0n1", 00:34:15.013 "name": "Nvme0n1", 00:34:15.013 "nguid": "D5693FDCC0A04AA6A5041F78581329D6", 00:34:15.013 "uuid": "d5693fdc-c0a0-4aa6-a504-1f78581329d6" 00:34:15.013 } 00:34:15.013 ] 00:34:15.013 } 00:34:15.013 ] 00:34:15.013 15:51:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.013 15:51:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:15.013 15:51:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:15.013 15:51:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:15.272 15:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:34:15.272 15:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:15.272 15:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:15.272 15:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:15.531 15:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:15.532 15:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:34:15.532 15:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:15.532 15:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:15.532 15:51:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.532 15:51:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:15.532 15:51:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.532 15:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:15.532 15:51:21 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:15.532 15:51:21 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:15.532 15:51:21 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:15.532 15:51:21 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:15.532 15:51:21 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:15.532 15:51:21 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:15.532 15:51:21 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:15.532 rmmod nvme_tcp 00:34:15.532 rmmod nvme_fabrics 00:34:15.532 rmmod nvme_keyring 00:34:15.532 15:51:21 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:15.532 15:51:21 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:15.532 15:51:21 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:15.532 15:51:21 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3260144 ']' 00:34:15.532 15:51:21 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3260144 00:34:15.532 15:51:21 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3260144 ']' 00:34:15.532 15:51:21 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3260144 00:34:15.532 15:51:21 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:34:15.532 15:51:21 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:15.532 15:51:21 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3260144 00:34:15.532 15:51:21 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:15.532 15:51:21 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:15.532 15:51:21 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3260144' 00:34:15.532 killing process with pid 3260144 00:34:15.532 15:51:21 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3260144 00:34:15.532 15:51:21 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3260144 00:34:18.065 15:51:23 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:18.065 15:51:23 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:18.065 15:51:23 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:18.065 15:51:23 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:18.065 15:51:23 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:34:18.065 15:51:23 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:18.065 15:51:23 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:18.065 15:51:23 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:18.065 15:51:23 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:18.065 15:51:23 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.065 15:51:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:18.065 15:51:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.969 15:51:25 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:19.969 00:34:19.969 real 0m23.480s 00:34:19.969 user 0m29.853s 00:34:19.969 sys 0m6.330s 00:34:19.969 15:51:25 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:19.969 15:51:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:19.969 ************************************ 00:34:19.969 END TEST nvmf_identify_passthru 00:34:19.969 ************************************ 00:34:19.969 15:51:25 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:19.969 15:51:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:19.969 15:51:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:19.969 15:51:25 -- common/autotest_common.sh@10 -- # set +x 00:34:19.969 ************************************ 00:34:19.969 START TEST nvmf_dif 00:34:19.969 ************************************ 00:34:19.969 15:51:25 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:19.969 * Looking for test storage... 
00:34:19.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:19.969 15:51:25 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:19.969 15:51:25 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:34:19.969 15:51:25 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:19.969 15:51:25 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:19.969 15:51:25 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:19.969 15:51:25 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:19.969 15:51:25 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:19.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.969 --rc genhtml_branch_coverage=1 00:34:19.969 --rc genhtml_function_coverage=1 00:34:19.969 --rc genhtml_legend=1 00:34:19.969 --rc geninfo_all_blocks=1 00:34:19.969 --rc geninfo_unexecuted_blocks=1 00:34:19.969 00:34:19.969 ' 00:34:19.969 15:51:25 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:19.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.969 --rc genhtml_branch_coverage=1 00:34:19.969 --rc genhtml_function_coverage=1 00:34:19.969 --rc genhtml_legend=1 00:34:19.969 --rc geninfo_all_blocks=1 00:34:19.969 --rc geninfo_unexecuted_blocks=1 00:34:19.969 00:34:19.969 ' 00:34:19.969 15:51:25 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:34:19.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.969 --rc genhtml_branch_coverage=1 00:34:19.969 --rc genhtml_function_coverage=1 00:34:19.969 --rc genhtml_legend=1 00:34:19.969 --rc geninfo_all_blocks=1 00:34:19.969 --rc geninfo_unexecuted_blocks=1 00:34:19.969 00:34:19.969 ' 00:34:19.969 15:51:25 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:19.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.969 --rc genhtml_branch_coverage=1 00:34:19.969 --rc genhtml_function_coverage=1 00:34:19.969 --rc genhtml_legend=1 00:34:19.969 --rc geninfo_all_blocks=1 00:34:19.969 --rc geninfo_unexecuted_blocks=1 00:34:19.969 00:34:19.969 ' 00:34:19.969 15:51:25 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.969 15:51:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:19.969 15:51:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.969 15:51:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.969 15:51:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:19.970 15:51:25 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.970 15:51:25 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:19.970 15:51:25 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.970 15:51:25 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.970 15:51:25 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.970 15:51:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.970 15:51:25 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.970 15:51:25 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.970 15:51:25 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:19.970 15:51:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:19.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:19.970 15:51:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:19.970 15:51:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:34:19.970 15:51:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:19.970 15:51:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:19.970 15:51:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.970 15:51:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:19.970 15:51:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:19.970 15:51:25 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:34:19.970 15:51:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:26.537 15:51:31 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:26.537 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:26.537 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.537 15:51:31 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:26.537 Found net devices under 0000:86:00.0: cvl_0_0 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:26.537 Found net devices under 0000:86:00.1: cvl_0_1 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:26.537 
15:51:31 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:26.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:26.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:34:26.537 00:34:26.537 --- 10.0.0.2 ping statistics --- 00:34:26.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.537 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:26.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:26.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:34:26.537 00:34:26.537 --- 10.0.0.1 ping statistics --- 00:34:26.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.537 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:26.537 15:51:31 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:26.538 15:51:31 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:26.538 15:51:31 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:28.443 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:28.443 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:28.443 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:28.443 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:28.443 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:28.443 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:28.443 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:28.443 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:28.443 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:28.443 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:28.443 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:28.443 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:28.443 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:34:28.443 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:28.443 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:28.443 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:28.443 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:28.701 15:51:34 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.701 15:51:34 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:28.701 15:51:34 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:28.701 15:51:34 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.701 15:51:34 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:28.701 15:51:34 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:28.701 15:51:34 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:28.701 15:51:34 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:28.701 15:51:34 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:28.701 15:51:34 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:28.701 15:51:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.701 15:51:34 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3265824 00:34:28.701 15:51:34 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3265824 00:34:28.701 15:51:34 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:28.701 15:51:34 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3265824 ']' 00:34:28.701 15:51:34 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.701 15:51:34 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:28.701 15:51:34 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:28.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.701 15:51:34 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:28.701 15:51:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.701 [2024-12-06 15:51:34.659959] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:34:28.701 [2024-12-06 15:51:34.660012] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.960 [2024-12-06 15:51:34.738051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.960 [2024-12-06 15:51:34.779393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:28.960 [2024-12-06 15:51:34.779429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:28.960 [2024-12-06 15:51:34.779436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:28.960 [2024-12-06 15:51:34.779442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:28.960 [2024-12-06 15:51:34.779448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:28.960 [2024-12-06 15:51:34.779998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.960 15:51:34 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:28.960 15:51:34 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:28.960 15:51:34 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:28.960 15:51:34 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:28.960 15:51:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.960 15:51:34 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:28.960 15:51:34 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:28.960 15:51:34 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:28.960 15:51:34 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.960 15:51:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.960 [2024-12-06 15:51:34.908468] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:28.960 15:51:34 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.960 15:51:34 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:28.960 15:51:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:28.960 15:51:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:28.960 15:51:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.960 ************************************ 00:34:28.960 START TEST fio_dif_1_default 00:34:28.960 ************************************ 00:34:28.960 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:28.960 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:28.960 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:28.960 15:51:34 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:28.960 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:28.960 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:28.960 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:28.960 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.960 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.217 bdev_null0 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.217 [2024-12-06 15:51:34.980784] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:29.217 { 00:34:29.217 "params": { 00:34:29.217 "name": "Nvme$subsystem", 00:34:29.217 "trtype": "$TEST_TRANSPORT", 00:34:29.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.217 "adrfam": "ipv4", 00:34:29.217 "trsvcid": "$NVMF_PORT", 00:34:29.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.217 "hdgst": ${hdgst:-false}, 00:34:29.217 "ddgst": ${ddgst:-false} 00:34:29.217 }, 00:34:29.217 "method": "bdev_nvme_attach_controller" 00:34:29.217 } 00:34:29.217 EOF 00:34:29.217 )") 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:29.217 15:51:34 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:29.217 "params": { 00:34:29.217 "name": "Nvme0", 00:34:29.217 "trtype": "tcp", 00:34:29.217 "traddr": "10.0.0.2", 00:34:29.217 "adrfam": "ipv4", 00:34:29.217 "trsvcid": "4420", 00:34:29.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.217 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.217 "hdgst": false, 00:34:29.217 "ddgst": false 00:34:29.217 }, 00:34:29.217 "method": "bdev_nvme_attach_controller" 00:34:29.218 }' 00:34:29.218 15:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:29.218 15:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:29.218 15:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.218 15:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.218 15:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:29.218 15:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:29.218 15:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:29.218 15:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:29.218 15:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:29.218 15:51:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.474 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:29.474 fio-3.35 
00:34:29.474 Starting 1 thread 00:34:41.659 00:34:41.659 filename0: (groupid=0, jobs=1): err= 0: pid=3266136: Fri Dec 6 15:51:46 2024 00:34:41.659 read: IOPS=216, BW=867KiB/s (888kB/s)(8704KiB/10036msec) 00:34:41.659 slat (nsec): min=5799, max=26940, avg=6209.00, stdev=1184.10 00:34:41.659 clat (usec): min=375, max=44030, avg=18429.72, stdev=20324.47 00:34:41.659 lat (usec): min=380, max=44057, avg=18435.93, stdev=20324.37 00:34:41.659 clat percentiles (usec): 00:34:41.659 | 1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 404], 20.00th=[ 416], 00:34:41.659 | 30.00th=[ 424], 40.00th=[ 445], 50.00th=[ 553], 60.00th=[40633], 00:34:41.659 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:34:41.659 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:34:41.659 | 99.99th=[43779] 00:34:41.659 bw ( KiB/s): min= 672, max= 1088, per=100.00%, avg=868.80, stdev=114.80, samples=20 00:34:41.659 iops : min= 168, max= 272, avg=217.20, stdev=28.70, samples=20 00:34:41.659 lat (usec) : 500=47.20%, 750=8.87% 00:34:41.659 lat (msec) : 50=43.93% 00:34:41.659 cpu : usr=92.05%, sys=7.69%, ctx=13, majf=0, minf=0 00:34:41.659 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.659 issued rwts: total=2176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.659 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:41.659 00:34:41.659 Run status group 0 (all jobs): 00:34:41.659 READ: bw=867KiB/s (888kB/s), 867KiB/s-867KiB/s (888kB/s-888kB/s), io=8704KiB (8913kB), run=10036-10036msec 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.659 00:34:41.659 real 0m11.257s 00:34:41.659 user 0m16.085s 00:34:41.659 sys 0m1.059s 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.659 ************************************ 00:34:41.659 END TEST fio_dif_1_default 00:34:41.659 ************************************ 00:34:41.659 15:51:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:41.659 15:51:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:41.659 15:51:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:41.659 15:51:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:41.659 ************************************ 00:34:41.659 START TEST fio_dif_1_multi_subsystems 00:34:41.659 ************************************ 00:34:41.659 15:51:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.659 bdev_null0 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.659 15:51:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.659 [2024-12-06 15:51:46.310840] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.659 bdev_null1 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:41.659 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:41.659 { 00:34:41.659 "params": { 00:34:41.659 "name": "Nvme$subsystem", 00:34:41.659 "trtype": "$TEST_TRANSPORT", 00:34:41.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.659 "adrfam": "ipv4", 00:34:41.659 "trsvcid": "$NVMF_PORT", 00:34:41.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.660 "hdgst": ${hdgst:-false}, 00:34:41.660 "ddgst": ${ddgst:-false} 00:34:41.660 }, 00:34:41.660 "method": "bdev_nvme_attach_controller" 00:34:41.660 } 00:34:41.660 EOF 00:34:41.660 )") 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:41.660 { 00:34:41.660 "params": { 00:34:41.660 "name": "Nvme$subsystem", 00:34:41.660 "trtype": "$TEST_TRANSPORT", 00:34:41.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.660 "adrfam": "ipv4", 00:34:41.660 "trsvcid": "$NVMF_PORT", 00:34:41.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.660 "hdgst": ${hdgst:-false}, 00:34:41.660 "ddgst": ${ddgst:-false} 00:34:41.660 }, 00:34:41.660 "method": "bdev_nvme_attach_controller" 00:34:41.660 } 00:34:41.660 EOF 00:34:41.660 )") 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:41.660 "params": { 00:34:41.660 "name": "Nvme0", 00:34:41.660 "trtype": "tcp", 00:34:41.660 "traddr": "10.0.0.2", 00:34:41.660 "adrfam": "ipv4", 00:34:41.660 "trsvcid": "4420", 00:34:41.660 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:41.660 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:41.660 "hdgst": false, 00:34:41.660 "ddgst": false 00:34:41.660 }, 00:34:41.660 "method": "bdev_nvme_attach_controller" 00:34:41.660 },{ 00:34:41.660 "params": { 00:34:41.660 "name": "Nvme1", 00:34:41.660 "trtype": "tcp", 00:34:41.660 "traddr": "10.0.0.2", 00:34:41.660 "adrfam": "ipv4", 00:34:41.660 "trsvcid": "4420", 00:34:41.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:41.660 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:41.660 "hdgst": false, 00:34:41.660 "ddgst": false 00:34:41.660 }, 00:34:41.660 "method": "bdev_nvme_attach_controller" 00:34:41.660 }' 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:41.660 15:51:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.660 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:41.660 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:41.660 fio-3.35 00:34:41.660 Starting 2 threads 00:34:51.637 00:34:51.638 filename0: (groupid=0, jobs=1): err= 0: pid=3268010: Fri Dec 6 15:51:57 2024 00:34:51.638 read: IOPS=188, BW=754KiB/s (772kB/s)(7552KiB/10020msec) 00:34:51.638 slat (nsec): min=5863, max=34797, avg=7148.96, stdev=2266.05 00:34:51.638 clat (usec): min=465, max=42495, avg=21207.83, stdev=20425.89 00:34:51.638 lat (usec): min=471, max=42501, avg=21214.98, stdev=20425.29 00:34:51.638 clat percentiles (usec): 00:34:51.638 | 1.00th=[ 478], 5.00th=[ 494], 10.00th=[ 578], 20.00th=[ 611], 00:34:51.638 | 30.00th=[ 619], 40.00th=[ 627], 50.00th=[41157], 60.00th=[41157], 00:34:51.638 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:34:51.638 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:51.638 | 99.99th=[42730] 00:34:51.638 bw ( KiB/s): min= 672, max= 768, per=49.17%, avg=753.60, stdev=28.39, samples=20 00:34:51.638 iops : min= 168, max= 192, avg=188.40, stdev= 7.10, samples=20 00:34:51.638 lat (usec) : 500=7.47%, 750=39.09%, 1000=3.02% 00:34:51.638 lat (msec) : 50=50.42% 00:34:51.638 cpu : usr=96.34%, sys=3.41%, ctx=6, majf=0, minf=115 00:34:51.638 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:51.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:51.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.638 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.638 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:51.638 filename1: (groupid=0, jobs=1): err= 0: pid=3268011: Fri Dec 6 15:51:57 2024 00:34:51.638 read: IOPS=194, BW=778KiB/s (797kB/s)(7792KiB/10014msec) 00:34:51.638 slat (nsec): min=5900, max=32616, avg=7148.50, stdev=2169.95 00:34:51.638 clat (usec): min=378, max=42584, avg=20541.69, stdev=20361.28 00:34:51.638 lat (usec): min=384, max=42591, avg=20548.84, stdev=20360.66 00:34:51.638 clat percentiles (usec): 00:34:51.638 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 416], 20.00th=[ 424], 00:34:51.638 | 30.00th=[ 441], 40.00th=[ 578], 50.00th=[ 963], 60.00th=[40633], 00:34:51.638 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:34:51.638 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:51.638 | 99.99th=[42730] 00:34:51.638 bw ( KiB/s): min= 704, max= 832, per=50.74%, avg=777.60, stdev=36.11, samples=20 00:34:51.638 iops : min= 176, max= 208, avg=194.40, stdev= 9.03, samples=20 00:34:51.638 lat (usec) : 500=37.63%, 750=11.86%, 1000=0.92% 00:34:51.638 lat (msec) : 2=0.31%, 50=49.28% 00:34:51.638 cpu : usr=96.34%, sys=3.42%, ctx=14, majf=0, minf=62 00:34:51.638 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:51.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.638 issued rwts: total=1948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.638 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:51.638 00:34:51.638 Run status group 0 (all jobs): 00:34:51.638 READ: bw=1531KiB/s (1568kB/s), 754KiB/s-778KiB/s (772kB/s-797kB/s), io=15.0MiB (15.7MB), run=10014-10020msec 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.638 00:34:51.638 real 0m11.313s 00:34:51.638 user 0m26.138s 00:34:51.638 sys 0m1.006s 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:51.638 15:51:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.638 ************************************ 00:34:51.638 END TEST fio_dif_1_multi_subsystems 00:34:51.638 ************************************ 00:34:51.638 15:51:57 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:51.638 15:51:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:51.638 15:51:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:51.638 15:51:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:51.898 ************************************ 00:34:51.898 START TEST fio_dif_rand_params 00:34:51.898 ************************************ 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.898 bdev_null0 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.898 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.899 [2024-12-06 15:51:57.694974] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:51.899 { 00:34:51.899 "params": { 00:34:51.899 "name": "Nvme$subsystem", 00:34:51.899 "trtype": 
"$TEST_TRANSPORT", 00:34:51.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:51.899 "adrfam": "ipv4", 00:34:51.899 "trsvcid": "$NVMF_PORT", 00:34:51.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:51.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:51.899 "hdgst": ${hdgst:-false}, 00:34:51.899 "ddgst": ${ddgst:-false} 00:34:51.899 }, 00:34:51.899 "method": "bdev_nvme_attach_controller" 00:34:51.899 } 00:34:51.899 EOF 00:34:51.899 )") 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- 
# grep libasan 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:51.899 "params": { 00:34:51.899 "name": "Nvme0", 00:34:51.899 "trtype": "tcp", 00:34:51.899 "traddr": "10.0.0.2", 00:34:51.899 "adrfam": "ipv4", 00:34:51.899 "trsvcid": "4420", 00:34:51.899 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:51.899 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:51.899 "hdgst": false, 00:34:51.899 "ddgst": false 00:34:51.899 }, 00:34:51.899 "method": "bdev_nvme_attach_controller" 00:34:51.899 }' 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:51.899 15:51:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.158 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:52.158 ... 00:34:52.158 fio-3.35 00:34:52.158 Starting 3 threads 00:34:58.715 00:34:58.715 filename0: (groupid=0, jobs=1): err= 0: pid=3269912: Fri Dec 6 15:52:03 2024 00:34:58.715 read: IOPS=293, BW=36.7MiB/s (38.5MB/s)(185MiB/5043msec) 00:34:58.715 slat (nsec): min=6112, max=26160, avg=11296.84, stdev=2041.62 00:34:58.715 clat (usec): min=5358, max=53632, avg=10180.73, stdev=5458.49 00:34:58.715 lat (usec): min=5371, max=53646, avg=10192.02, stdev=5458.58 00:34:58.715 clat percentiles (usec): 00:34:58.715 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 8225], 00:34:58.715 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:34:58.715 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11600], 95.00th=[12125], 00:34:58.715 | 99.00th=[49021], 99.50th=[50594], 99.90th=[52691], 99.95th=[53740], 00:34:58.715 | 99.99th=[53740] 00:34:58.715 bw ( KiB/s): min=33024, max=41472, per=32.26%, avg=37836.80, stdev=2632.36, samples=10 00:34:58.715 iops : min= 258, max= 324, avg=295.60, stdev=20.57, samples=10 00:34:58.715 lat (msec) : 10=58.65%, 20=39.59%, 50=1.15%, 100=0.61% 00:34:58.715 cpu : usr=94.86%, sys=4.82%, ctx=12, majf=0, minf=30 00:34:58.715 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.715 issued rwts: total=1480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.715 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:58.715 filename0: (groupid=0, jobs=1): err= 0: pid=3269913: Fri Dec 6 15:52:03 2024 00:34:58.715 read: IOPS=321, BW=40.2MiB/s (42.1MB/s)(201MiB/5005msec) 00:34:58.715 slat (nsec): min=6121, max=26366, 
avg=10676.36, stdev=2242.54 00:34:58.715 clat (usec): min=3720, max=52014, avg=9322.19, stdev=5356.48 00:34:58.715 lat (usec): min=3727, max=52022, avg=9332.87, stdev=5356.56 00:34:58.715 clat percentiles (usec): 00:34:58.715 | 1.00th=[ 4490], 5.00th=[ 6259], 10.00th=[ 6783], 20.00th=[ 7767], 00:34:58.715 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9110], 00:34:58.715 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10683], 00:34:58.715 | 99.00th=[49021], 99.50th=[49546], 99.90th=[51119], 99.95th=[52167], 00:34:58.715 | 99.99th=[52167] 00:34:58.715 bw ( KiB/s): min=36608, max=45056, per=35.05%, avg=41113.60, stdev=2574.75, samples=10 00:34:58.715 iops : min= 286, max= 352, avg=321.20, stdev=20.12, samples=10 00:34:58.715 lat (msec) : 4=0.25%, 10=84.14%, 20=13.93%, 50=1.31%, 100=0.37% 00:34:58.715 cpu : usr=94.44%, sys=5.26%, ctx=9, majf=0, minf=74 00:34:58.715 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.715 issued rwts: total=1608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.715 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:58.715 filename0: (groupid=0, jobs=1): err= 0: pid=3269914: Fri Dec 6 15:52:03 2024 00:34:58.715 read: IOPS=303, BW=38.0MiB/s (39.8MB/s)(192MiB/5043msec) 00:34:58.715 slat (nsec): min=6109, max=26228, avg=11176.23, stdev=1941.54 00:34:58.715 clat (usec): min=4372, max=50845, avg=9829.37, stdev=4394.64 00:34:58.715 lat (usec): min=4379, max=50857, avg=9840.54, stdev=4394.72 00:34:58.715 clat percentiles (usec): 00:34:58.715 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 8094], 00:34:58.715 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:34:58.715 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11600], 95.00th=[12256], 00:34:58.715 | 99.00th=[45876], 
99.50th=[49021], 99.90th=[50594], 99.95th=[50594], 00:34:58.715 | 99.99th=[50594] 00:34:58.715 bw ( KiB/s): min=35584, max=41216, per=33.42%, avg=39193.60, stdev=1824.02, samples=10 00:34:58.716 iops : min= 278, max= 322, avg=306.20, stdev=14.25, samples=10 00:34:58.716 lat (msec) : 10=64.45%, 20=34.44%, 50=0.91%, 100=0.20% 00:34:58.716 cpu : usr=94.88%, sys=4.80%, ctx=7, majf=0, minf=39 00:34:58.716 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.716 issued rwts: total=1533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.716 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:58.716 00:34:58.716 Run status group 0 (all jobs): 00:34:58.716 READ: bw=115MiB/s (120MB/s), 36.7MiB/s-40.2MiB/s (38.5MB/s-42.1MB/s), io=578MiB (606MB), run=5005-5043msec 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:58.716 
15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.716 bdev_null0 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.716 [2024-12-06 15:52:03.843756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.716 bdev_null1 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.716 15:52:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.716 bdev_null2 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:58.716 15:52:03 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:58.716 { 00:34:58.716 "params": { 00:34:58.716 "name": "Nvme$subsystem", 00:34:58.716 "trtype": "$TEST_TRANSPORT", 00:34:58.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.716 "adrfam": "ipv4", 00:34:58.716 "trsvcid": "$NVMF_PORT", 00:34:58.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.716 "hdgst": ${hdgst:-false}, 00:34:58.716 "ddgst": ${ddgst:-false} 00:34:58.716 }, 00:34:58.716 "method": "bdev_nvme_attach_controller" 00:34:58.716 } 00:34:58.716 EOF 00:34:58.716 )") 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.716 15:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:58.717 { 00:34:58.717 "params": { 00:34:58.717 "name": "Nvme$subsystem", 00:34:58.717 "trtype": "$TEST_TRANSPORT", 00:34:58.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.717 "adrfam": "ipv4", 00:34:58.717 "trsvcid": "$NVMF_PORT", 00:34:58.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.717 "hdgst": ${hdgst:-false}, 00:34:58.717 "ddgst": ${ddgst:-false} 00:34:58.717 }, 00:34:58.717 "method": "bdev_nvme_attach_controller" 00:34:58.717 } 00:34:58.717 EOF 00:34:58.717 )") 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:58.717 
15:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:58.717 { 00:34:58.717 "params": { 00:34:58.717 "name": "Nvme$subsystem", 00:34:58.717 "trtype": "$TEST_TRANSPORT", 00:34:58.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.717 "adrfam": "ipv4", 00:34:58.717 "trsvcid": "$NVMF_PORT", 00:34:58.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.717 "hdgst": ${hdgst:-false}, 00:34:58.717 "ddgst": ${ddgst:-false} 00:34:58.717 }, 00:34:58.717 "method": "bdev_nvme_attach_controller" 00:34:58.717 } 00:34:58.717 EOF 00:34:58.717 )") 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:58.717 "params": { 00:34:58.717 "name": "Nvme0", 00:34:58.717 "trtype": "tcp", 00:34:58.717 "traddr": "10.0.0.2", 00:34:58.717 "adrfam": "ipv4", 00:34:58.717 "trsvcid": "4420", 00:34:58.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.717 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.717 "hdgst": false, 00:34:58.717 "ddgst": false 00:34:58.717 }, 00:34:58.717 "method": "bdev_nvme_attach_controller" 00:34:58.717 },{ 00:34:58.717 "params": { 00:34:58.717 "name": "Nvme1", 00:34:58.717 "trtype": "tcp", 00:34:58.717 "traddr": "10.0.0.2", 00:34:58.717 "adrfam": "ipv4", 00:34:58.717 "trsvcid": "4420", 00:34:58.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:58.717 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:58.717 "hdgst": false, 00:34:58.717 "ddgst": false 00:34:58.717 }, 00:34:58.717 "method": "bdev_nvme_attach_controller" 00:34:58.717 },{ 00:34:58.717 "params": { 00:34:58.717 "name": "Nvme2", 00:34:58.717 "trtype": "tcp", 00:34:58.717 "traddr": "10.0.0.2", 00:34:58.717 "adrfam": "ipv4", 00:34:58.717 "trsvcid": "4420", 00:34:58.717 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:58.717 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:58.717 "hdgst": false, 00:34:58.717 "ddgst": false 00:34:58.717 }, 00:34:58.717 "method": "bdev_nvme_attach_controller" 00:34:58.717 }' 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.717 15:52:03 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:58.717 15:52:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:58.717 15:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:58.717 15:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:58.717 15:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:58.717 15:52:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.717 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:58.717 ... 00:34:58.717 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:58.717 ... 00:34:58.717 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:58.717 ... 
00:34:58.717 fio-3.35 00:34:58.717 Starting 24 threads 00:35:10.910 00:35:10.910 filename0: (groupid=0, jobs=1): err= 0: pid=3271175: Fri Dec 6 15:52:15 2024 00:35:10.910 read: IOPS=593, BW=2375KiB/s (2432kB/s)(23.2MiB/10023msec) 00:35:10.910 slat (nsec): min=6694, max=82154, avg=35895.85, stdev=15012.79 00:35:10.910 clat (usec): min=14156, max=37383, avg=26640.20, stdev=1995.07 00:35:10.910 lat (usec): min=14207, max=37400, avg=26676.09, stdev=1998.52 00:35:10.910 clat percentiles (usec): 00:35:10.910 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:35:10.910 | 30.00th=[25297], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:35:10.910 | 70.00th=[27395], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:35:10.910 | 99.00th=[30540], 99.50th=[31065], 99.90th=[33817], 99.95th=[33817], 00:35:10.910 | 99.99th=[37487] 00:35:10.910 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2374.15, stdev=132.79, samples=20 00:35:10.910 iops : min= 544, max= 640, avg=593.50, stdev=33.18, samples=20 00:35:10.910 lat (msec) : 20=0.37%, 50=99.63% 00:35:10.910 cpu : usr=98.62%, sys=0.98%, ctx=25, majf=0, minf=9 00:35:10.910 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:35:10.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.910 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.910 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.910 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.910 filename0: (groupid=0, jobs=1): err= 0: pid=3271176: Fri Dec 6 15:52:15 2024 00:35:10.910 read: IOPS=593, BW=2375KiB/s (2432kB/s)(23.2MiB/10023msec) 00:35:10.910 slat (nsec): min=6815, max=82095, avg=34003.23, stdev=13569.41 00:35:10.910 clat (usec): min=14536, max=33587, avg=26640.73, stdev=1940.36 00:35:10.910 lat (usec): min=14555, max=33631, avg=26674.74, stdev=1945.12 00:35:10.910 clat percentiles (usec): 00:35:10.910 | 
1.00th=[22938], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:35:10.910 | 30.00th=[25297], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:35:10.910 | 70.00th=[27395], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:35:10.910 | 99.00th=[30540], 99.50th=[30802], 99.90th=[33424], 99.95th=[33424], 00:35:10.910 | 99.99th=[33817] 00:35:10.910 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2374.15, stdev=134.30, samples=20 00:35:10.910 iops : min= 544, max= 640, avg=593.50, stdev=33.56, samples=20 00:35:10.910 lat (msec) : 20=0.27%, 50=99.73% 00:35:10.910 cpu : usr=98.64%, sys=0.97%, ctx=19, majf=0, minf=9 00:35:10.910 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.910 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.910 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.911 filename0: (groupid=0, jobs=1): err= 0: pid=3271177: Fri Dec 6 15:52:15 2024 00:35:10.911 read: IOPS=600, BW=2402KiB/s (2460kB/s)(23.5MiB/10017msec) 00:35:10.911 slat (nsec): min=6656, max=81640, avg=29586.05, stdev=16882.13 00:35:10.911 clat (usec): min=1932, max=34909, avg=26387.36, stdev=2899.82 00:35:10.911 lat (usec): min=1953, max=34923, avg=26416.95, stdev=2904.60 00:35:10.911 clat percentiles (usec): 00:35:10.911 | 1.00th=[10683], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:35:10.911 | 30.00th=[25297], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:35:10.911 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:35:10.911 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31327], 99.95th=[34341], 00:35:10.911 | 99.99th=[34866] 00:35:10.911 bw ( KiB/s): min= 2048, max= 3072, per=4.22%, avg=2399.70, stdev=215.06, samples=20 00:35:10.911 iops : min= 512, max= 768, avg=599.90, stdev=53.75, samples=20 
00:35:10.911 lat (msec) : 2=0.07%, 4=0.20%, 10=0.57%, 20=1.33%, 50=97.84% 00:35:10.911 cpu : usr=98.53%, sys=1.08%, ctx=13, majf=0, minf=9 00:35:10.911 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:10.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.911 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.911 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.911 filename0: (groupid=0, jobs=1): err= 0: pid=3271178: Fri Dec 6 15:52:15 2024 00:35:10.911 read: IOPS=594, BW=2377KiB/s (2434kB/s)(23.2MiB/10018msec) 00:35:10.911 slat (nsec): min=6680, max=84869, avg=37135.60, stdev=15724.17 00:35:10.911 clat (usec): min=13659, max=41695, avg=26590.86, stdev=2028.41 00:35:10.911 lat (usec): min=13668, max=41721, avg=26627.99, stdev=2030.18 00:35:10.911 clat percentiles (usec): 00:35:10.911 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:35:10.911 | 30.00th=[25297], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:35:10.911 | 70.00th=[27395], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:35:10.911 | 99.00th=[30540], 99.50th=[30802], 99.90th=[33817], 99.95th=[33817], 00:35:10.911 | 99.99th=[41681] 00:35:10.911 bw ( KiB/s): min= 2176, max= 2560, per=4.19%, avg=2384.58, stdev=122.16, samples=19 00:35:10.911 iops : min= 544, max= 640, avg=596.11, stdev=30.52, samples=19 00:35:10.911 lat (msec) : 20=0.57%, 50=99.43% 00:35:10.911 cpu : usr=98.61%, sys=0.99%, ctx=34, majf=0, minf=9 00:35:10.911 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.911 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.911 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.911 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:35:10.911 filename0: (groupid=0, jobs=1): err= 0: pid=3271179: Fri Dec 6 15:52:15 2024 00:35:10.911 read: IOPS=593, BW=2374KiB/s (2431kB/s)(23.2MiB/10001msec) 00:35:10.911 slat (usec): min=4, max=111, avg=52.14, stdev=19.24 00:35:10.911 clat (usec): min=11219, max=40810, avg=26481.46, stdev=2140.35 00:35:10.911 lat (usec): min=11238, max=40824, avg=26533.60, stdev=2141.97 00:35:10.911 clat percentiles (usec): 00:35:10.911 | 1.00th=[22938], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:35:10.911 | 30.00th=[25035], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:35:10.911 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29492], 95.00th=[30016], 00:35:10.911 | 99.00th=[30540], 99.50th=[30802], 99.90th=[40633], 99.95th=[40633], 00:35:10.911 | 99.99th=[40633] 00:35:10.911 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2378.11, stdev=137.04, samples=19 00:35:10.911 iops : min= 544, max= 640, avg=594.53, stdev=34.26, samples=19 00:35:10.911 lat (msec) : 20=0.27%, 50=99.73% 00:35:10.911 cpu : usr=98.69%, sys=0.84%, ctx=39, majf=0, minf=9 00:35:10.911 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.911 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.911 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.911 filename0: (groupid=0, jobs=1): err= 0: pid=3271180: Fri Dec 6 15:52:15 2024 00:35:10.911 read: IOPS=593, BW=2374KiB/s (2431kB/s)(23.2MiB/10001msec) 00:35:10.911 slat (usec): min=5, max=111, avg=50.79, stdev=19.66 00:35:10.911 clat (usec): min=11269, max=41192, avg=26490.35, stdev=2155.86 00:35:10.911 lat (usec): min=11281, max=41209, avg=26541.14, stdev=2157.75 00:35:10.911 clat percentiles (usec): 00:35:10.911 | 1.00th=[22676], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 
00:35:10.911 | 30.00th=[25035], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:35:10.911 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29492], 95.00th=[30016], 00:35:10.911 | 99.00th=[30540], 99.50th=[31065], 99.90th=[41157], 99.95th=[41157], 00:35:10.911 | 99.99th=[41157] 00:35:10.911 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2378.11, stdev=137.04, samples=19 00:35:10.911 iops : min= 544, max= 640, avg=594.53, stdev=34.26, samples=19 00:35:10.911 lat (msec) : 20=0.27%, 50=99.73% 00:35:10.911 cpu : usr=98.74%, sys=0.84%, ctx=55, majf=0, minf=9 00:35:10.911 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:10.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.911 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.911 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.911 filename0: (groupid=0, jobs=1): err= 0: pid=3271181: Fri Dec 6 15:52:15 2024 00:35:10.911 read: IOPS=595, BW=2383KiB/s (2441kB/s)(23.3MiB/10009msec) 00:35:10.911 slat (usec): min=6, max=114, avg=39.24, stdev=20.34 00:35:10.911 clat (usec): min=10478, max=43442, avg=26533.60, stdev=3039.53 00:35:10.911 lat (usec): min=10487, max=43509, avg=26572.84, stdev=3044.24 00:35:10.911 clat percentiles (usec): 00:35:10.911 | 1.00th=[14877], 5.00th=[22938], 10.00th=[24249], 20.00th=[24773], 00:35:10.911 | 30.00th=[25035], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:35:10.911 | 70.00th=[27395], 80.00th=[28443], 90.00th=[30016], 95.00th=[30278], 00:35:10.911 | 99.00th=[36439], 99.50th=[39584], 99.90th=[43254], 99.95th=[43254], 00:35:10.911 | 99.99th=[43254] 00:35:10.911 bw ( KiB/s): min= 2176, max= 2560, per=4.20%, avg=2389.89, stdev=132.71, samples=19 00:35:10.911 iops : min= 544, max= 640, avg=597.47, stdev=33.18, samples=19 00:35:10.911 lat (msec) : 20=2.82%, 50=97.18% 00:35:10.911 cpu : usr=98.59%, 
sys=0.96%, ctx=88, majf=0, minf=9 00:35:10.911 IO depths : 1=1.9%, 2=7.7%, 4=23.4%, 8=56.1%, 16=10.9%, 32=0.0%, >=64=0.0% 00:35:10.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.911 complete : 0=0.0%, 4=94.0%, 8=0.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.911 issued rwts: total=5964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.911 filename0: (groupid=0, jobs=1): err= 0: pid=3271182: Fri Dec 6 15:52:15 2024 00:35:10.911 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10006msec) 00:35:10.911 slat (usec): min=6, max=109, avg=46.00, stdev=20.93 00:35:10.911 clat (usec): min=20007, max=38566, avg=26620.68, stdev=1892.52 00:35:10.911 lat (usec): min=20019, max=38587, avg=26666.69, stdev=1892.95 00:35:10.911 clat percentiles (usec): 00:35:10.911 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:35:10.911 | 30.00th=[25297], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:35:10.911 | 70.00th=[27395], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:35:10.911 | 99.00th=[30540], 99.50th=[30802], 99.90th=[34341], 99.95th=[34341], 00:35:10.911 | 99.99th=[38536] 00:35:10.911 bw ( KiB/s): min= 2176, max= 2565, per=4.18%, avg=2378.05, stdev=130.16, samples=19 00:35:10.911 iops : min= 544, max= 641, avg=594.47, stdev=32.48, samples=19 00:35:10.911 lat (msec) : 50=100.00% 00:35:10.911 cpu : usr=98.30%, sys=1.11%, ctx=66, majf=0, minf=9 00:35:10.911 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.911 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.911 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.911 filename1: (groupid=0, jobs=1): err= 0: pid=3271183: Fri Dec 6 15:52:15 2024 00:35:10.911 
read: IOPS=593, BW=2374KiB/s (2431kB/s)(23.2MiB/10007msec) 00:35:10.911 slat (usec): min=4, max=117, avg=30.69, stdev=22.28 00:35:10.911 clat (usec): min=11219, max=44086, avg=26750.54, stdev=2828.18 00:35:10.911 lat (usec): min=11227, max=44141, avg=26781.23, stdev=2827.50 00:35:10.911 clat percentiles (usec): 00:35:10.911 | 1.00th=[19792], 5.00th=[23725], 10.00th=[24511], 20.00th=[24773], 00:35:10.911 | 30.00th=[25035], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:35:10.911 | 70.00th=[27395], 80.00th=[28443], 90.00th=[30016], 95.00th=[30540], 00:35:10.911 | 99.00th=[39060], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:35:10.911 | 99.99th=[44303] 00:35:10.911 bw ( KiB/s): min= 2160, max= 2560, per=4.19%, avg=2382.32, stdev=124.26, samples=19 00:35:10.911 iops : min= 540, max= 640, avg=595.53, stdev=31.03, samples=19 00:35:10.911 lat (msec) : 20=1.13%, 50=98.87% 00:35:10.911 cpu : usr=98.07%, sys=1.19%, ctx=147, majf=0, minf=9 00:35:10.911 IO depths : 1=0.1%, 2=3.1%, 4=12.9%, 8=68.7%, 16=15.3%, 32=0.0%, >=64=0.0% 00:35:10.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.911 complete : 0=0.0%, 4=91.9%, 8=5.2%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.911 issued rwts: total=5940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.911 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.911 filename1: (groupid=0, jobs=1): err= 0: pid=3271184: Fri Dec 6 15:52:15 2024 00:35:10.911 read: IOPS=593, BW=2374KiB/s (2431kB/s)(23.2MiB/10001msec) 00:35:10.911 slat (usec): min=5, max=109, avg=56.22, stdev=20.28 00:35:10.911 clat (usec): min=11174, max=45472, avg=26491.05, stdev=2145.22 00:35:10.911 lat (usec): min=11240, max=45487, avg=26547.27, stdev=2146.19 00:35:10.911 clat percentiles (usec): 00:35:10.911 | 1.00th=[22938], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:35:10.911 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:35:10.911 | 70.00th=[27132], 80.00th=[28181], 
90.00th=[29492], 95.00th=[30016], 00:35:10.911 | 99.00th=[30540], 99.50th=[30802], 99.90th=[41157], 99.95th=[41157], 00:35:10.911 | 99.99th=[45351] 00:35:10.911 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2378.11, stdev=137.04, samples=19 00:35:10.912 iops : min= 544, max= 640, avg=594.53, stdev=34.26, samples=19 00:35:10.912 lat (msec) : 20=0.27%, 50=99.73% 00:35:10.912 cpu : usr=98.59%, sys=1.02%, ctx=15, majf=0, minf=9 00:35:10.912 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.912 filename1: (groupid=0, jobs=1): err= 0: pid=3271185: Fri Dec 6 15:52:15 2024 00:35:10.912 read: IOPS=594, BW=2378KiB/s (2436kB/s)(23.2MiB/10010msec) 00:35:10.912 slat (usec): min=7, max=114, avg=40.07, stdev=20.46 00:35:10.912 clat (usec): min=12535, max=31435, avg=26583.21, stdev=2059.50 00:35:10.912 lat (usec): min=12544, max=31457, avg=26623.29, stdev=2061.84 00:35:10.912 clat percentiles (usec): 00:35:10.912 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:35:10.912 | 30.00th=[25297], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:35:10.912 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:35:10.912 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:35:10.912 | 99.99th=[31327] 00:35:10.912 bw ( KiB/s): min= 2176, max= 2688, per=4.19%, avg=2384.53, stdev=142.45, samples=19 00:35:10.912 iops : min= 544, max= 672, avg=596.11, stdev=35.58, samples=19 00:35:10.912 lat (msec) : 20=0.54%, 50=99.46% 00:35:10.912 cpu : usr=98.62%, sys=0.97%, ctx=33, majf=0, minf=11 00:35:10.912 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.912 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.912 filename1: (groupid=0, jobs=1): err= 0: pid=3271186: Fri Dec 6 15:52:15 2024 00:35:10.912 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10005msec) 00:35:10.912 slat (usec): min=7, max=211, avg=47.85, stdev=24.38 00:35:10.912 clat (usec): min=19072, max=38509, avg=26599.41, stdev=1892.47 00:35:10.912 lat (usec): min=19087, max=38531, avg=26647.27, stdev=1892.83 00:35:10.912 clat percentiles (usec): 00:35:10.912 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:35:10.912 | 30.00th=[25297], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:35:10.912 | 70.00th=[27395], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:35:10.912 | 99.00th=[30540], 99.50th=[30802], 99.90th=[34341], 99.95th=[34341], 00:35:10.912 | 99.99th=[38536] 00:35:10.912 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2377.79, stdev=129.77, samples=19 00:35:10.912 iops : min= 544, max= 640, avg=594.42, stdev=32.40, samples=19 00:35:10.912 lat (msec) : 20=0.03%, 50=99.97% 00:35:10.912 cpu : usr=98.33%, sys=1.12%, ctx=68, majf=0, minf=9 00:35:10.912 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.912 filename1: (groupid=0, jobs=1): err= 0: pid=3271187: Fri Dec 6 15:52:15 2024 00:35:10.912 read: IOPS=593, BW=2374KiB/s (2431kB/s)(23.2MiB/10001msec) 00:35:10.912 slat (usec): min=5, max=245, avg=53.87, stdev=23.85 
00:35:10.912 clat (usec): min=11237, max=50238, avg=26483.02, stdev=2139.14 00:35:10.912 lat (usec): min=11252, max=50255, avg=26536.89, stdev=2141.23 00:35:10.912 clat percentiles (usec): 00:35:10.912 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:35:10.912 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:35:10.912 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29492], 95.00th=[30016], 00:35:10.912 | 99.00th=[30540], 99.50th=[30802], 99.90th=[40633], 99.95th=[40633], 00:35:10.912 | 99.99th=[50070] 00:35:10.912 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2378.11, stdev=134.84, samples=19 00:35:10.912 iops : min= 544, max= 640, avg=594.53, stdev=33.71, samples=19 00:35:10.912 lat (msec) : 20=0.27%, 50=99.70%, 100=0.03% 00:35:10.912 cpu : usr=98.56%, sys=0.97%, ctx=43, majf=0, minf=9 00:35:10.912 IO depths : 1=3.2%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.3%, 32=0.0%, >=64=0.0% 00:35:10.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.912 filename1: (groupid=0, jobs=1): err= 0: pid=3271188: Fri Dec 6 15:52:15 2024 00:35:10.912 read: IOPS=591, BW=2366KiB/s (2422kB/s)(23.2MiB/10042msec) 00:35:10.912 slat (usec): min=7, max=112, avg=56.90, stdev=20.08 00:35:10.912 clat (usec): min=11130, max=52758, avg=26466.46, stdev=2183.57 00:35:10.912 lat (usec): min=11196, max=52811, avg=26523.36, stdev=2185.64 00:35:10.912 clat percentiles (usec): 00:35:10.912 | 1.00th=[22938], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:35:10.912 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:35:10.912 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29492], 95.00th=[30016], 00:35:10.912 | 99.00th=[30540], 99.50th=[30802], 99.90th=[40633], 99.95th=[41681], 
00:35:10.912 | 99.99th=[52691] 00:35:10.912 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2373.80, stdev=134.77, samples=20 00:35:10.912 iops : min= 544, max= 640, avg=593.45, stdev=33.69, samples=20 00:35:10.912 lat (msec) : 20=0.24%, 50=99.73%, 100=0.03% 00:35:10.912 cpu : usr=97.53%, sys=1.43%, ctx=227, majf=0, minf=9 00:35:10.912 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 issued rwts: total=5939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.912 filename1: (groupid=0, jobs=1): err= 0: pid=3271189: Fri Dec 6 15:52:15 2024 00:35:10.912 read: IOPS=593, BW=2375KiB/s (2432kB/s)(23.2MiB/10018msec) 00:35:10.912 slat (usec): min=9, max=117, avg=42.45, stdev=19.79 00:35:10.912 clat (usec): min=12750, max=33928, avg=26544.84, stdev=1989.29 00:35:10.912 lat (usec): min=12760, max=33953, avg=26587.29, stdev=1992.44 00:35:10.912 clat percentiles (usec): 00:35:10.912 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:35:10.912 | 30.00th=[25297], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:35:10.912 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:35:10.912 | 99.00th=[30540], 99.50th=[30802], 99.90th=[33817], 99.95th=[33817], 00:35:10.912 | 99.99th=[33817] 00:35:10.912 bw ( KiB/s): min= 2176, max= 2560, per=4.19%, avg=2384.26, stdev=121.68, samples=19 00:35:10.912 iops : min= 544, max= 640, avg=596.00, stdev=30.37, samples=19 00:35:10.912 lat (msec) : 20=0.47%, 50=99.53% 00:35:10.912 cpu : usr=98.29%, sys=1.03%, ctx=142, majf=0, minf=10 00:35:10.912 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 complete : 0=0.0%, 
4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 issued rwts: total=5948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.912 filename1: (groupid=0, jobs=1): err= 0: pid=3271190: Fri Dec 6 15:52:15 2024 00:35:10.912 read: IOPS=593, BW=2375KiB/s (2432kB/s)(23.2MiB/10023msec) 00:35:10.912 slat (nsec): min=7519, max=79006, avg=34092.18, stdev=20027.86 00:35:10.912 clat (usec): min=8623, max=39637, avg=26657.12, stdev=1985.83 00:35:10.912 lat (usec): min=8632, max=39669, avg=26691.21, stdev=1988.31 00:35:10.912 clat percentiles (usec): 00:35:10.912 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24511], 20.00th=[25035], 00:35:10.912 | 30.00th=[25297], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870], 00:35:10.912 | 70.00th=[27395], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:35:10.912 | 99.00th=[30540], 99.50th=[30802], 99.90th=[33162], 99.95th=[33162], 00:35:10.912 | 99.99th=[39584] 00:35:10.912 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2374.15, stdev=134.30, samples=20 00:35:10.912 iops : min= 544, max= 640, avg=593.50, stdev=33.56, samples=20 00:35:10.912 lat (msec) : 10=0.03%, 20=0.24%, 50=99.73% 00:35:10.912 cpu : usr=98.84%, sys=0.78%, ctx=18, majf=0, minf=9 00:35:10.912 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.912 filename2: (groupid=0, jobs=1): err= 0: pid=3271191: Fri Dec 6 15:52:15 2024 00:35:10.912 read: IOPS=593, BW=2373KiB/s (2429kB/s)(23.2MiB/10008msec) 00:35:10.912 slat (usec): min=7, max=114, avg=39.58, stdev=25.18 00:35:10.912 clat (usec): min=20537, max=39910, avg=26684.90, stdev=1906.27 00:35:10.912 
lat (usec): min=20554, max=39930, avg=26724.48, stdev=1906.30 00:35:10.912 clat percentiles (usec): 00:35:10.912 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:35:10.912 | 30.00th=[25297], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870], 00:35:10.912 | 70.00th=[27395], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:35:10.912 | 99.00th=[30540], 99.50th=[31065], 99.90th=[35390], 99.95th=[35914], 00:35:10.912 | 99.99th=[40109] 00:35:10.912 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2377.79, stdev=129.77, samples=19 00:35:10.912 iops : min= 544, max= 640, avg=594.42, stdev=32.40, samples=19 00:35:10.912 lat (msec) : 50=100.00% 00:35:10.912 cpu : usr=98.57%, sys=0.88%, ctx=75, majf=0, minf=9 00:35:10.912 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.912 issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.912 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.912 filename2: (groupid=0, jobs=1): err= 0: pid=3271192: Fri Dec 6 15:52:15 2024 00:35:10.912 read: IOPS=594, BW=2377KiB/s (2434kB/s)(23.2MiB/10003msec) 00:35:10.912 slat (usec): min=7, max=109, avg=49.80, stdev=16.93 00:35:10.912 clat (usec): min=2765, max=40456, avg=26504.85, stdev=2301.72 00:35:10.913 lat (usec): min=2806, max=40477, avg=26554.64, stdev=2303.76 00:35:10.913 clat percentiles (usec): 00:35:10.913 | 1.00th=[22938], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:35:10.913 | 30.00th=[25035], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:35:10.913 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29754], 95.00th=[30016], 00:35:10.913 | 99.00th=[30540], 99.50th=[30802], 99.90th=[40633], 99.95th=[40633], 00:35:10.913 | 99.99th=[40633] 00:35:10.913 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2378.11, 
stdev=137.04, samples=19 00:35:10.913 iops : min= 544, max= 640, avg=594.53, stdev=34.26, samples=19 00:35:10.913 lat (msec) : 4=0.15%, 20=0.27%, 50=99.58% 00:35:10.913 cpu : usr=98.57%, sys=1.04%, ctx=44, majf=0, minf=9 00:35:10.913 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.913 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.913 issued rwts: total=5945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.913 filename2: (groupid=0, jobs=1): err= 0: pid=3271193: Fri Dec 6 15:52:15 2024 00:35:10.913 read: IOPS=599, BW=2400KiB/s (2458kB/s)(23.5MiB/10027msec) 00:35:10.913 slat (usec): min=6, max=117, avg=14.95, stdev=10.86 00:35:10.913 clat (usec): min=1947, max=31158, avg=26549.67, stdev=2917.34 00:35:10.913 lat (usec): min=1970, max=31175, avg=26564.62, stdev=2917.14 00:35:10.913 clat percentiles (usec): 00:35:10.913 | 1.00th=[12256], 5.00th=[24249], 10.00th=[24773], 20.00th=[25035], 00:35:10.913 | 30.00th=[25297], 40.00th=[26608], 50.00th=[26608], 60.00th=[26870], 00:35:10.913 | 70.00th=[27395], 80.00th=[28443], 90.00th=[30016], 95.00th=[30540], 00:35:10.913 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:35:10.913 | 99.99th=[31065] 00:35:10.913 bw ( KiB/s): min= 2176, max= 3072, per=4.22%, avg=2399.70, stdev=202.67, samples=20 00:35:10.913 iops : min= 544, max= 768, avg=599.90, stdev=50.65, samples=20 00:35:10.913 lat (msec) : 2=0.05%, 4=0.22%, 10=0.27%, 20=1.33%, 50=98.14% 00:35:10.913 cpu : usr=98.02%, sys=1.31%, ctx=92, majf=0, minf=9 00:35:10.913 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.913 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.913 issued rwts: 
total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.913 filename2: (groupid=0, jobs=1): err= 0: pid=3271194: Fri Dec 6 15:52:15 2024 00:35:10.913 read: IOPS=594, BW=2377KiB/s (2434kB/s)(23.2MiB/10003msec) 00:35:10.913 slat (nsec): min=7828, max=95208, avg=50765.66, stdev=17564.28 00:35:10.913 clat (usec): min=2147, max=40627, avg=26492.80, stdev=2295.15 00:35:10.913 lat (usec): min=2175, max=40643, avg=26543.57, stdev=2296.84 00:35:10.913 clat percentiles (usec): 00:35:10.913 | 1.00th=[22676], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:35:10.913 | 30.00th=[25035], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:35:10.913 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29492], 95.00th=[30016], 00:35:10.913 | 99.00th=[30540], 99.50th=[30802], 99.90th=[40633], 99.95th=[40633], 00:35:10.913 | 99.99th=[40633] 00:35:10.913 bw ( KiB/s): min= 2176, max= 2565, per=4.18%, avg=2378.37, stdev=137.41, samples=19 00:35:10.913 iops : min= 544, max= 641, avg=594.58, stdev=34.33, samples=19 00:35:10.913 lat (msec) : 4=0.13%, 20=0.27%, 50=99.60% 00:35:10.913 cpu : usr=98.66%, sys=0.89%, ctx=37, majf=0, minf=9 00:35:10.913 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=49.9%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.913 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.913 issued rwts: total=5944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.913 filename2: (groupid=0, jobs=1): err= 0: pid=3271195: Fri Dec 6 15:52:15 2024 00:35:10.913 read: IOPS=594, BW=2377KiB/s (2434kB/s)(23.2MiB/10018msec) 00:35:10.913 slat (usec): min=6, max=113, avg=41.10, stdev=20.91 00:35:10.913 clat (usec): min=15760, max=33664, avg=26520.10, stdev=1993.74 00:35:10.913 lat (usec): min=15769, max=33703, avg=26561.21, stdev=1997.23 00:35:10.913 clat 
percentiles (usec): 00:35:10.913 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:35:10.913 | 30.00th=[25035], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:35:10.913 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29754], 95.00th=[30016], 00:35:10.913 | 99.00th=[30540], 99.50th=[30802], 99.90th=[33817], 99.95th=[33817], 00:35:10.913 | 99.99th=[33817] 00:35:10.913 bw ( KiB/s): min= 2176, max= 2560, per=4.19%, avg=2384.58, stdev=122.16, samples=19 00:35:10.913 iops : min= 544, max= 640, avg=596.11, stdev=30.52, samples=19 00:35:10.913 lat (msec) : 20=0.54%, 50=99.46% 00:35:10.913 cpu : usr=98.49%, sys=1.09%, ctx=13, majf=0, minf=10 00:35:10.913 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.913 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.913 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.913 filename2: (groupid=0, jobs=1): err= 0: pid=3271196: Fri Dec 6 15:52:15 2024 00:35:10.913 read: IOPS=594, BW=2378KiB/s (2436kB/s)(23.2MiB/10010msec) 00:35:10.913 slat (nsec): min=6927, max=83686, avg=36580.28, stdev=13726.47 00:35:10.913 clat (usec): min=12042, max=35901, avg=26598.00, stdev=2101.07 00:35:10.913 lat (usec): min=12104, max=35930, avg=26634.58, stdev=2101.73 00:35:10.913 clat percentiles (usec): 00:35:10.913 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:35:10.913 | 30.00th=[25297], 40.00th=[26346], 50.00th=[26608], 60.00th=[26608], 00:35:10.913 | 70.00th=[27395], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:35:10.913 | 99.00th=[30540], 99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:35:10.913 | 99.99th=[35914] 00:35:10.913 bw ( KiB/s): min= 2176, max= 2688, per=4.19%, avg=2384.53, stdev=142.45, samples=19 00:35:10.913 iops : min= 544, max= 672, 
avg=596.11, stdev=35.58, samples=19 00:35:10.913 lat (msec) : 20=0.57%, 50=99.43% 00:35:10.913 cpu : usr=98.66%, sys=0.91%, ctx=45, majf=0, minf=9 00:35:10.913 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.913 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.913 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.913 filename2: (groupid=0, jobs=1): err= 0: pid=3271197: Fri Dec 6 15:52:15 2024 00:35:10.913 read: IOPS=591, BW=2366KiB/s (2422kB/s)(23.2MiB/10044msec) 00:35:10.913 slat (usec): min=7, max=112, avg=55.70, stdev=21.80 00:35:10.913 clat (usec): min=11111, max=44628, avg=26442.36, stdev=2181.02 00:35:10.913 lat (usec): min=11127, max=44645, avg=26498.06, stdev=2183.65 00:35:10.913 clat percentiles (usec): 00:35:10.913 | 1.00th=[22676], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:35:10.913 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:35:10.913 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29492], 95.00th=[30016], 00:35:10.913 | 99.00th=[30540], 99.50th=[30802], 99.90th=[43254], 99.95th=[43779], 00:35:10.913 | 99.99th=[44827] 00:35:10.913 bw ( KiB/s): min= 2176, max= 2565, per=4.17%, avg=2374.85, stdev=134.67, samples=20 00:35:10.913 iops : min= 544, max= 641, avg=593.70, stdev=33.65, samples=20 00:35:10.913 lat (msec) : 20=0.27%, 50=99.73% 00:35:10.913 cpu : usr=98.46%, sys=1.03%, ctx=57, majf=0, minf=9 00:35:10.913 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:10.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.913 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.913 issued rwts: total=5940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.913 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:35:10.913 filename2: (groupid=0, jobs=1): err= 0: pid=3271198: Fri Dec 6 15:52:15 2024 00:35:10.913 read: IOPS=593, BW=2375KiB/s (2432kB/s)(23.2MiB/10023msec) 00:35:10.913 slat (nsec): min=7009, max=79514, avg=38992.79, stdev=18978.80 00:35:10.913 clat (usec): min=14543, max=33597, avg=26557.85, stdev=1945.51 00:35:10.913 lat (usec): min=14569, max=33642, avg=26596.84, stdev=1951.07 00:35:10.913 clat percentiles (usec): 00:35:10.913 | 1.00th=[22938], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:35:10.913 | 30.00th=[25035], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:35:10.913 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29754], 95.00th=[30016], 00:35:10.913 | 99.00th=[30540], 99.50th=[30802], 99.90th=[33424], 99.95th=[33424], 00:35:10.913 | 99.99th=[33817] 00:35:10.913 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2374.15, stdev=134.30, samples=20 00:35:10.913 iops : min= 544, max= 640, avg=593.50, stdev=33.56, samples=20 00:35:10.913 lat (msec) : 20=0.27%, 50=99.73% 00:35:10.913 cpu : usr=98.32%, sys=1.12%, ctx=112, majf=0, minf=9 00:35:10.913 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:10.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.913 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.913 issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.913 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.913 00:35:10.913 Run status group 0 (all jobs): 00:35:10.913 READ: bw=55.5MiB/s (58.2MB/s), 2366KiB/s-2402KiB/s (2422kB/s-2460kB/s), io=558MiB (585MB), run=10001-10044msec 00:35:10.913 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:10.913 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:10.913 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.913 15:52:15 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:10.913 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:10.914 
15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.914 bdev_null0 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:10.914 [2024-12-06 15:52:15.735217] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.914 bdev_null1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.914 15:52:15 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:10.914 { 00:35:10.914 "params": { 00:35:10.914 "name": "Nvme$subsystem", 00:35:10.914 "trtype": "$TEST_TRANSPORT", 00:35:10.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.914 "adrfam": "ipv4", 00:35:10.914 "trsvcid": "$NVMF_PORT", 00:35:10.914 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.914 "hdgst": ${hdgst:-false}, 00:35:10.914 "ddgst": ${ddgst:-false} 00:35:10.914 }, 00:35:10.914 "method": "bdev_nvme_attach_controller" 00:35:10.914 } 00:35:10.914 EOF 00:35:10.914 )") 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 
-- # for subsystem in "${@:-1}" 00:35:10.914 15:52:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:10.914 { 00:35:10.914 "params": { 00:35:10.914 "name": "Nvme$subsystem", 00:35:10.914 "trtype": "$TEST_TRANSPORT", 00:35:10.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.914 "adrfam": "ipv4", 00:35:10.914 "trsvcid": "$NVMF_PORT", 00:35:10.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.915 "hdgst": ${hdgst:-false}, 00:35:10.915 "ddgst": ${ddgst:-false} 00:35:10.915 }, 00:35:10.915 "method": "bdev_nvme_attach_controller" 00:35:10.915 } 00:35:10.915 EOF 00:35:10.915 )") 00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:10.915 "params": { 00:35:10.915 "name": "Nvme0", 00:35:10.915 "trtype": "tcp", 00:35:10.915 "traddr": "10.0.0.2", 00:35:10.915 "adrfam": "ipv4", 00:35:10.915 "trsvcid": "4420", 00:35:10.915 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.915 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:10.915 "hdgst": false, 00:35:10.915 "ddgst": false 00:35:10.915 }, 00:35:10.915 "method": "bdev_nvme_attach_controller" 00:35:10.915 },{ 00:35:10.915 "params": { 00:35:10.915 "name": "Nvme1", 00:35:10.915 "trtype": "tcp", 00:35:10.915 "traddr": "10.0.0.2", 00:35:10.915 "adrfam": "ipv4", 00:35:10.915 "trsvcid": "4420", 00:35:10.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:10.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:10.915 "hdgst": false, 00:35:10.915 "ddgst": false 00:35:10.915 }, 00:35:10.915 "method": "bdev_nvme_attach_controller" 00:35:10.915 }' 00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:10.915 15:52:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:10.915 15:52:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.915 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:10.915 ... 00:35:10.915 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:10.915 ... 00:35:10.915 fio-3.35 00:35:10.915 Starting 4 threads 00:35:16.177 00:35:16.177 filename0: (groupid=0, jobs=1): err= 0: pid=3273144: Fri Dec 6 15:52:21 2024 00:35:16.177 read: IOPS=2640, BW=20.6MiB/s (21.6MB/s)(103MiB/5002msec) 00:35:16.177 slat (nsec): min=6009, max=72107, avg=21854.88, stdev=14185.15 00:35:16.177 clat (usec): min=647, max=5428, avg=2942.27, stdev=417.08 00:35:16.177 lat (usec): min=657, max=5435, avg=2964.13, stdev=419.15 00:35:16.177 clat percentiles (usec): 00:35:16.177 | 1.00th=[ 1745], 5.00th=[ 2278], 10.00th=[ 2474], 20.00th=[ 2704], 00:35:16.177 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:35:16.177 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3294], 95.00th=[ 3523], 00:35:16.177 | 99.00th=[ 4293], 99.50th=[ 4686], 99.90th=[ 5014], 99.95th=[ 5276], 00:35:16.177 | 99.99th=[ 5407] 00:35:16.177 bw ( KiB/s): min=20512, max=22112, per=25.53%, avg=21126.40, stdev=514.73, samples=10 00:35:16.177 iops : min= 2564, max= 2764, avg=2640.80, stdev=64.34, samples=10 00:35:16.177 lat (usec) : 750=0.03%, 1000=0.17% 00:35:16.177 lat (msec) : 2=1.85%, 4=96.12%, 10=1.82% 00:35:16.177 cpu : usr=97.56%, sys=2.04%, ctx=11, majf=0, minf=9 00:35:16.177 IO depths : 1=1.7%, 2=18.3%, 4=55.3%, 8=24.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.177 
complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.177 issued rwts: total=13209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.177 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:16.177 filename0: (groupid=0, jobs=1): err= 0: pid=3273145: Fri Dec 6 15:52:21 2024 00:35:16.177 read: IOPS=2525, BW=19.7MiB/s (20.7MB/s)(98.7MiB/5001msec) 00:35:16.177 slat (nsec): min=5972, max=62180, avg=15715.14, stdev=10331.24 00:35:16.177 clat (usec): min=595, max=6243, avg=3118.95, stdev=450.87 00:35:16.177 lat (usec): min=601, max=6261, avg=3134.67, stdev=450.79 00:35:16.177 clat percentiles (usec): 00:35:16.177 | 1.00th=[ 1942], 5.00th=[ 2540], 10.00th=[ 2737], 20.00th=[ 2900], 00:35:16.177 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3097], 60.00th=[ 3130], 00:35:16.177 | 70.00th=[ 3195], 80.00th=[ 3294], 90.00th=[ 3589], 95.00th=[ 3949], 00:35:16.177 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5473], 99.95th=[ 5669], 00:35:16.177 | 99.99th=[ 5932] 00:35:16.177 bw ( KiB/s): min=18613, max=20656, per=24.33%, avg=20132.11, stdev=646.32, samples=9 00:35:16.177 iops : min= 2326, max= 2582, avg=2516.44, stdev=80.97, samples=9 00:35:16.177 lat (usec) : 750=0.03%, 1000=0.07% 00:35:16.177 lat (msec) : 2=1.00%, 4=94.39%, 10=4.51% 00:35:16.177 cpu : usr=96.34%, sys=3.34%, ctx=9, majf=0, minf=9 00:35:16.177 IO depths : 1=0.2%, 2=8.1%, 4=64.3%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.177 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.177 issued rwts: total=12631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.177 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:16.177 filename1: (groupid=0, jobs=1): err= 0: pid=3273146: Fri Dec 6 15:52:21 2024 00:35:16.177 read: IOPS=2607, BW=20.4MiB/s (21.4MB/s)(102MiB/5002msec) 00:35:16.177 slat (nsec): min=5964, max=59318, avg=15290.11, stdev=9953.06 00:35:16.177 clat 
(usec): min=684, max=5830, avg=3016.13, stdev=399.96 00:35:16.177 lat (usec): min=690, max=5837, avg=3031.42, stdev=400.97 00:35:16.177 clat percentiles (usec): 00:35:16.177 | 1.00th=[ 1893], 5.00th=[ 2376], 10.00th=[ 2573], 20.00th=[ 2802], 00:35:16.177 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097], 00:35:16.177 | 70.00th=[ 3163], 80.00th=[ 3195], 90.00th=[ 3359], 95.00th=[ 3589], 00:35:16.177 | 99.00th=[ 4359], 99.50th=[ 4752], 99.90th=[ 5145], 99.95th=[ 5276], 00:35:16.177 | 99.99th=[ 5800] 00:35:16.177 bw ( KiB/s): min=20279, max=21440, per=25.19%, avg=20848.78, stdev=362.42, samples=9 00:35:16.177 iops : min= 2534, max= 2680, avg=2606.00, stdev=45.48, samples=9 00:35:16.177 lat (usec) : 750=0.02%, 1000=0.11% 00:35:16.177 lat (msec) : 2=1.14%, 4=96.75%, 10=1.98% 00:35:16.177 cpu : usr=96.74%, sys=2.90%, ctx=11, majf=0, minf=9 00:35:16.177 IO depths : 1=0.7%, 2=13.3%, 4=59.4%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.177 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.177 issued rwts: total=13044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.177 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:16.177 filename1: (groupid=0, jobs=1): err= 0: pid=3273147: Fri Dec 6 15:52:21 2024 00:35:16.177 read: IOPS=2570, BW=20.1MiB/s (21.1MB/s)(100MiB/5001msec) 00:35:16.177 slat (nsec): min=5962, max=62163, avg=14725.45, stdev=9923.12 00:35:16.177 clat (usec): min=682, max=6440, avg=3064.81, stdev=411.99 00:35:16.177 lat (usec): min=702, max=6464, avg=3079.53, stdev=412.46 00:35:16.177 clat percentiles (usec): 00:35:16.177 | 1.00th=[ 1909], 5.00th=[ 2474], 10.00th=[ 2671], 20.00th=[ 2868], 00:35:16.177 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3097], 00:35:16.177 | 70.00th=[ 3163], 80.00th=[ 3228], 90.00th=[ 3425], 95.00th=[ 3720], 00:35:16.177 | 99.00th=[ 4490], 99.50th=[ 4817], 99.90th=[ 5473], 
99.95th=[ 5997], 00:35:16.177 | 99.99th=[ 6128] 00:35:16.177 bw ( KiB/s): min=19440, max=21232, per=24.84%, avg=20551.11, stdev=535.77, samples=9 00:35:16.177 iops : min= 2430, max= 2654, avg=2568.89, stdev=66.97, samples=9 00:35:16.177 lat (usec) : 750=0.03%, 1000=0.02% 00:35:16.177 lat (msec) : 2=1.19%, 4=95.60%, 10=3.15% 00:35:16.177 cpu : usr=96.70%, sys=2.96%, ctx=9, majf=0, minf=9 00:35:16.177 IO depths : 1=0.4%, 2=10.2%, 4=62.1%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:16.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.177 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:16.177 issued rwts: total=12855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:16.177 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:16.177 00:35:16.177 Run status group 0 (all jobs): 00:35:16.177 READ: bw=80.8MiB/s (84.7MB/s), 19.7MiB/s-20.6MiB/s (20.7MB/s-21.6MB/s), io=404MiB (424MB), run=5001-5002msec 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:16.437 
15:52:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.437 00:35:16.437 real 0m24.594s 00:35:16.437 user 4m51.789s 00:35:16.437 sys 0m4.935s 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:16.437 15:52:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:16.437 ************************************ 00:35:16.437 END TEST fio_dif_rand_params 00:35:16.437 ************************************ 00:35:16.437 15:52:22 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:16.437 15:52:22 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:16.437 15:52:22 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:16.437 15:52:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:16.437 ************************************ 00:35:16.437 START TEST fio_dif_digest 00:35:16.437 ************************************ 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:16.437 bdev_null0 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:16.437 [2024-12-06 15:52:22.366481] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:16.437 { 00:35:16.437 "params": { 00:35:16.437 "name": "Nvme$subsystem", 00:35:16.437 "trtype": "$TEST_TRANSPORT", 00:35:16.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:16.437 "adrfam": "ipv4", 00:35:16.437 "trsvcid": "$NVMF_PORT", 00:35:16.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:16.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:16.437 "hdgst": ${hdgst:-false}, 00:35:16.437 "ddgst": ${ddgst:-false} 00:35:16.437 }, 00:35:16.437 "method": "bdev_nvme_attach_controller" 00:35:16.437 } 00:35:16.437 EOF 00:35:16.437 )") 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:16.437 "params": { 00:35:16.437 "name": "Nvme0", 00:35:16.437 "trtype": "tcp", 00:35:16.437 "traddr": "10.0.0.2", 00:35:16.437 "adrfam": "ipv4", 00:35:16.437 "trsvcid": "4420", 00:35:16.437 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:16.437 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:16.437 "hdgst": true, 00:35:16.437 "ddgst": true 00:35:16.437 }, 00:35:16.437 "method": "bdev_nvme_attach_controller" 00:35:16.437 }' 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:16.437 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:16.438 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:16.718 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:16.718 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:16.718 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:16.718 15:52:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:16.979 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:16.979 ... 
00:35:16.979 fio-3.35 00:35:16.979 Starting 3 threads 00:35:29.173 00:35:29.173 filename0: (groupid=0, jobs=1): err= 0: pid=3274210: Fri Dec 6 15:52:33 2024 00:35:29.173 read: IOPS=285, BW=35.7MiB/s (37.4MB/s)(357MiB/10010msec) 00:35:29.173 slat (nsec): min=6331, max=31994, avg=11886.01, stdev=1761.58 00:35:29.173 clat (usec): min=7235, max=13503, avg=10490.88, stdev=756.59 00:35:29.173 lat (usec): min=7248, max=13515, avg=10502.77, stdev=756.54 00:35:29.173 clat percentiles (usec): 00:35:29.173 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9896], 00:35:29.173 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:35:29.173 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:35:29.173 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13304], 99.95th=[13435], 00:35:29.173 | 99.99th=[13566] 00:35:29.173 bw ( KiB/s): min=35328, max=37632, per=34.87%, avg=36556.80, stdev=619.32, samples=20 00:35:29.173 iops : min= 276, max= 294, avg=285.60, stdev= 4.84, samples=20 00:35:29.173 lat (msec) : 10=24.84%, 20=75.16% 00:35:29.173 cpu : usr=94.39%, sys=5.32%, ctx=16, majf=0, minf=11 00:35:29.173 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.173 issued rwts: total=2858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.173 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:29.173 filename0: (groupid=0, jobs=1): err= 0: pid=3274212: Fri Dec 6 15:52:33 2024 00:35:29.173 read: IOPS=277, BW=34.7MiB/s (36.4MB/s)(347MiB/10004msec) 00:35:29.173 slat (nsec): min=6370, max=27821, avg=11852.95, stdev=1902.25 00:35:29.173 clat (usec): min=6502, max=13981, avg=10797.72, stdev=795.57 00:35:29.173 lat (usec): min=6517, max=13995, avg=10809.57, stdev=795.49 00:35:29.173 clat percentiles (usec): 00:35:29.173 | 1.00th=[ 8848], 
5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10159], 00:35:29.173 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:35:29.173 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:35:29.173 | 99.00th=[12649], 99.50th=[12780], 99.90th=[13173], 99.95th=[13698], 00:35:29.173 | 99.99th=[13960] 00:35:29.173 bw ( KiB/s): min=34304, max=36864, per=33.92%, avg=35557.05, stdev=671.34, samples=19 00:35:29.173 iops : min= 268, max= 288, avg=277.79, stdev= 5.24, samples=19 00:35:29.173 lat (msec) : 10=13.47%, 20=86.53% 00:35:29.173 cpu : usr=94.63%, sys=5.08%, ctx=18, majf=0, minf=9 00:35:29.173 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.173 issued rwts: total=2776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.173 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:29.173 filename0: (groupid=0, jobs=1): err= 0: pid=3274213: Fri Dec 6 15:52:33 2024 00:35:29.173 read: IOPS=258, BW=32.3MiB/s (33.8MB/s)(324MiB/10045msec) 00:35:29.173 slat (nsec): min=6366, max=60557, avg=11838.73, stdev=1856.46 00:35:29.173 clat (usec): min=8817, max=53025, avg=11596.61, stdev=1882.92 00:35:29.173 lat (usec): min=8830, max=53050, avg=11608.45, stdev=1883.02 00:35:29.173 clat percentiles (usec): 00:35:29.173 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:35:29.173 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:35:29.173 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:35:29.173 | 99.00th=[13566], 99.50th=[14091], 99.90th=[53216], 99.95th=[53216], 00:35:29.173 | 99.99th=[53216] 00:35:29.173 bw ( KiB/s): min=29696, max=34304, per=31.63%, avg=33152.00, stdev=959.66, samples=20 00:35:29.173 iops : min= 232, max= 268, avg=259.00, stdev= 7.50, samples=20 00:35:29.173 lat 
(msec) : 10=2.39%, 20=97.42%, 50=0.08%, 100=0.12% 00:35:29.173 cpu : usr=94.34%, sys=5.35%, ctx=21, majf=0, minf=10 00:35:29.173 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.173 issued rwts: total=2592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.173 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:29.173 00:35:29.173 Run status group 0 (all jobs): 00:35:29.173 READ: bw=102MiB/s (107MB/s), 32.3MiB/s-35.7MiB/s (33.8MB/s-37.4MB/s), io=1028MiB (1078MB), run=10004-10045msec 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.173 00:35:29.173 real 0m11.431s 00:35:29.173 
user 0m35.694s 00:35:29.173 sys 0m1.963s 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:29.173 15:52:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:29.173 ************************************ 00:35:29.173 END TEST fio_dif_digest 00:35:29.173 ************************************ 00:35:29.173 15:52:33 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:29.173 15:52:33 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:29.173 15:52:33 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:29.173 15:52:33 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:29.173 15:52:33 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:29.173 15:52:33 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:29.173 15:52:33 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:29.173 15:52:33 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:29.173 rmmod nvme_tcp 00:35:29.173 rmmod nvme_fabrics 00:35:29.173 rmmod nvme_keyring 00:35:29.173 15:52:33 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:29.173 15:52:33 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:29.173 15:52:33 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:29.173 15:52:33 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3265824 ']' 00:35:29.173 15:52:33 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3265824 00:35:29.173 15:52:33 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3265824 ']' 00:35:29.173 15:52:33 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3265824 00:35:29.173 15:52:33 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:29.173 15:52:33 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:29.173 15:52:33 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3265824 00:35:29.173 15:52:33 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:29.173 15:52:33 nvmf_dif -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:29.173 15:52:33 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3265824' 00:35:29.173 killing process with pid 3265824 00:35:29.173 15:52:33 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3265824 00:35:29.173 15:52:33 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3265824 00:35:29.173 15:52:34 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:29.173 15:52:34 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:31.076 Waiting for block devices as requested 00:35:31.076 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:31.076 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:31.076 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:31.333 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:31.333 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:31.334 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:31.334 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:31.592 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:31.592 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:31.592 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:31.868 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:31.868 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:31.868 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:31.868 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:32.126 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:32.126 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:32.126 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:32.386 15:52:38 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:32.386 15:52:38 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:32.386 15:52:38 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:32.386 15:52:38 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:32.386 15:52:38 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 
00:35:32.386 15:52:38 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:32.386 15:52:38 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:32.386 15:52:38 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:32.386 15:52:38 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.386 15:52:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:32.386 15:52:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.338 15:52:40 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:34.338 00:35:34.338 real 1m14.607s 00:35:34.338 user 7m9.644s 00:35:34.338 sys 0m20.887s 00:35:34.338 15:52:40 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:34.338 15:52:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:34.338 ************************************ 00:35:34.338 END TEST nvmf_dif 00:35:34.338 ************************************ 00:35:34.338 15:52:40 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:34.338 15:52:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:34.338 15:52:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:34.338 15:52:40 -- common/autotest_common.sh@10 -- # set +x 00:35:34.338 ************************************ 00:35:34.338 START TEST nvmf_abort_qd_sizes 00:35:34.338 ************************************ 00:35:34.338 15:52:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:34.598 * Looking for test storage... 
00:35:34.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:34.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.598 --rc genhtml_branch_coverage=1 00:35:34.598 --rc genhtml_function_coverage=1 00:35:34.598 --rc genhtml_legend=1 00:35:34.598 --rc geninfo_all_blocks=1 00:35:34.598 --rc geninfo_unexecuted_blocks=1 00:35:34.598 00:35:34.598 ' 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:34.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.598 --rc genhtml_branch_coverage=1 00:35:34.598 --rc genhtml_function_coverage=1 00:35:34.598 --rc genhtml_legend=1 00:35:34.598 --rc 
geninfo_all_blocks=1 00:35:34.598 --rc geninfo_unexecuted_blocks=1 00:35:34.598 00:35:34.598 ' 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:34.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.598 --rc genhtml_branch_coverage=1 00:35:34.598 --rc genhtml_function_coverage=1 00:35:34.598 --rc genhtml_legend=1 00:35:34.598 --rc geninfo_all_blocks=1 00:35:34.598 --rc geninfo_unexecuted_blocks=1 00:35:34.598 00:35:34.598 ' 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:34.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.598 --rc genhtml_branch_coverage=1 00:35:34.598 --rc genhtml_function_coverage=1 00:35:34.598 --rc genhtml_legend=1 00:35:34.598 --rc geninfo_all_blocks=1 00:35:34.598 --rc geninfo_unexecuted_blocks=1 00:35:34.598 00:35:34.598 ' 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.598 15:52:40 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.598 15:52:40 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.599 15:52:40 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:34.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:34.599 15:52:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:41.163 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:41.163 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:41.163 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:41.163 15:52:46 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:35:41.163 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:41.163 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:41.164 15:52:46 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:41.164 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:41.164 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:41.164 Found net devices under 0000:86:00.0: cvl_0_0 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:41.164 Found net devices under 0000:86:00.1: cvl_0_1 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:41.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:41.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:35:41.164 00:35:41.164 --- 10.0.0.2 ping statistics --- 00:35:41.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.164 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:41.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:41.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:35:41.164 00:35:41.164 --- 10.0.0.1 ping statistics --- 00:35:41.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.164 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:41.164 15:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:43.067 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:43.067 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:43.325 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:43.325 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:43.325 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:43.325 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:43.325 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:43.325 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:43.325 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:43.325 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:43.325 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:43.325 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:43.325 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:43.325 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:43.326 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:43.326 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:44.700 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:44.700 15:52:50 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:44.700 15:52:50 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:44.700 15:52:50 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:44.700 15:52:50 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:44.700 15:52:50 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:44.700 15:52:50 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:44.959 15:52:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:44.959 15:52:50 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:44.959 15:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:44.959 15:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:44.959 15:52:50 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3282225 00:35:44.959 15:52:50 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3282225 00:35:44.959 15:52:50 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:44.959 15:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3282225 ']' 00:35:44.959 15:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.959 15:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.959 15:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:44.959 15:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.959 15:52:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:44.959 [2024-12-06 15:52:50.764858] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:35:44.959 [2024-12-06 15:52:50.764916] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:44.959 [2024-12-06 15:52:50.845843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:44.959 [2024-12-06 15:52:50.889264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:44.959 [2024-12-06 15:52:50.889303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:44.959 [2024-12-06 15:52:50.889310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:44.959 [2024-12-06 15:52:50.889316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:44.959 [2024-12-06 15:52:50.889321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:44.959 [2024-12-06 15:52:50.890914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.959 [2024-12-06 15:52:50.891020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:44.959 [2024-12-06 15:52:50.891124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.959 [2024-12-06 15:52:50.891125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:45.917 15:52:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:45.917 ************************************ 00:35:45.917 START TEST spdk_target_abort 00:35:45.917 ************************************ 00:35:45.917 15:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:45.917 15:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:45.917 15:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:45.917 15:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.917 15:52:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.186 spdk_targetn1 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.186 [2024-12-06 15:52:54.516745] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.186 [2024-12-06 15:52:54.573055] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:49.186 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:49.187 15:52:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:52.459 Initializing NVMe Controllers 00:35:52.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:52.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:52.459 Initialization complete. Launching workers. 
00:35:52.459 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14640, failed: 0 00:35:52.459 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1241, failed to submit 13399 00:35:52.459 success 679, unsuccessful 562, failed 0 00:35:52.459 15:52:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:52.459 15:52:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:55.772 Initializing NVMe Controllers 00:35:55.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:55.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:55.772 Initialization complete. Launching workers. 00:35:55.772 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8554, failed: 0 00:35:55.772 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1245, failed to submit 7309 00:35:55.772 success 324, unsuccessful 921, failed 0 00:35:55.772 15:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:55.773 15:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:58.406 Initializing NVMe Controllers 00:35:58.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:58.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:58.406 Initialization complete. Launching workers. 
00:35:58.406 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38682, failed: 0 00:35:58.406 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2767, failed to submit 35915 00:35:58.406 success 596, unsuccessful 2171, failed 0 00:35:58.406 15:53:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:58.406 15:53:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.406 15:53:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:58.406 15:53:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.406 15:53:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:58.406 15:53:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.406 15:53:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:00.297 15:53:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.297 15:53:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3282225 00:36:00.297 15:53:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3282225 ']' 00:36:00.297 15:53:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3282225 00:36:00.297 15:53:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:36:00.297 15:53:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:00.297 15:53:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3282225 00:36:00.297 15:53:06 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:00.297 15:53:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:00.297 15:53:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3282225' 00:36:00.297 killing process with pid 3282225 00:36:00.297 15:53:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3282225 00:36:00.297 15:53:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3282225 00:36:00.554 00:36:00.554 real 0m14.756s 00:36:00.554 user 0m58.715s 00:36:00.554 sys 0m2.675s 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:00.554 ************************************ 00:36:00.554 END TEST spdk_target_abort 00:36:00.554 ************************************ 00:36:00.554 15:53:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:00.554 15:53:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:00.554 15:53:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:00.554 15:53:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:00.554 ************************************ 00:36:00.554 START TEST kernel_target_abort 00:36:00.554 ************************************ 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:36:00.554 15:53:06 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:36:00.554 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:36:00.555 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:00.555 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:00.555 15:53:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:03.846 Waiting for block devices as requested 00:36:03.846 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:03.846 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:03.846 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:03.846 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:03.846 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:03.846 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:03.846 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:03.846 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:04.106 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:04.106 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:04.106 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:04.365 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:04.365 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:04.365 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:04.624 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:04.624 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:04.624 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:04.624 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:04.624 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:04.624 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:04.624 15:53:10 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:04.624 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:04.624 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:04.624 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:04.624 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:04.624 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:04.884 No valid GPT data, bailing 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:04.884 00:36:04.884 Discovery Log Number of Records 2, Generation counter 2 00:36:04.884 =====Discovery Log Entry 0====== 00:36:04.884 trtype: tcp 00:36:04.884 adrfam: ipv4 00:36:04.884 subtype: current discovery subsystem 00:36:04.884 treq: not specified, sq flow control disable supported 00:36:04.884 portid: 1 00:36:04.884 trsvcid: 4420 00:36:04.884 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:04.884 traddr: 10.0.0.1 00:36:04.884 eflags: none 00:36:04.884 sectype: none 00:36:04.884 =====Discovery Log Entry 1====== 00:36:04.884 trtype: tcp 00:36:04.884 adrfam: ipv4 00:36:04.884 subtype: nvme subsystem 00:36:04.884 treq: not specified, sq flow control disable supported 00:36:04.884 portid: 1 00:36:04.884 trsvcid: 4420 00:36:04.884 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:04.884 traddr: 10.0.0.1 00:36:04.884 eflags: none 00:36:04.884 sectype: none 00:36:04.884 15:53:10 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:36:04.884 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:04.885 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:04.885 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:04.885 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:04.885 15:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:08.188 Initializing NVMe Controllers 00:36:08.188 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:08.188 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:08.188 Initialization complete. Launching workers. 
00:36:08.188 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94797, failed: 0 00:36:08.188 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94797, failed to submit 0 00:36:08.188 success 0, unsuccessful 94797, failed 0 00:36:08.188 15:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:08.188 15:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:11.468 Initializing NVMe Controllers 00:36:11.468 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:11.468 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:11.468 Initialization complete. Launching workers. 00:36:11.468 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 150312, failed: 0 00:36:11.468 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37914, failed to submit 112398 00:36:11.468 success 0, unsuccessful 37914, failed 0 00:36:11.468 15:53:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:11.468 15:53:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:14.755 Initializing NVMe Controllers 00:36:14.755 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:14.755 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:14.755 Initialization complete. Launching workers. 
00:36:14.755 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 142460, failed: 0 00:36:14.755 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35674, failed to submit 106786 00:36:14.755 success 0, unsuccessful 35674, failed 0 00:36:14.755 15:53:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:14.755 15:53:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:14.755 15:53:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:14.755 15:53:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:14.755 15:53:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:14.755 15:53:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:14.755 15:53:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:14.755 15:53:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:14.755 15:53:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:14.755 15:53:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:17.293 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:17.293 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:18.669 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:18.929 00:36:18.929 real 0m18.186s 00:36:18.929 user 0m9.196s 00:36:18.929 sys 0m5.068s 00:36:18.929 15:53:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:18.929 15:53:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:18.929 ************************************ 00:36:18.929 END TEST kernel_target_abort 00:36:18.929 ************************************ 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:18.929 rmmod nvme_tcp 00:36:18.929 rmmod nvme_fabrics 00:36:18.929 rmmod nvme_keyring 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3282225 ']' 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3282225 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3282225 ']' 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3282225 00:36:18.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3282225) - No such process 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3282225 is not found' 00:36:18.929 Process with pid 3282225 is not found 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:18.929 15:53:24 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:21.460 Waiting for block devices as requested 00:36:21.719 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:21.719 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:21.979 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:21.979 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:21.979 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:21.979 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:22.239 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:22.239 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:22.239 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:22.499 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:22.499 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:22.499 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:22.499 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:22.758 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:22.758 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:22.758 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:23.017 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:23.017 15:53:28 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:23.017 15:53:28 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:23.017 15:53:28 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:23.017 15:53:28 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:23.017 15:53:28 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:23.017 15:53:28 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:23.017 15:53:28 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:23.017 15:53:28 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:23.017 15:53:28 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:23.017 15:53:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:23.017 15:53:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:25.556 15:53:30 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:25.556 00:36:25.556 real 0m50.627s 00:36:25.556 user 1m12.341s 00:36:25.556 sys 0m16.487s 00:36:25.556 15:53:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:25.556 15:53:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:25.556 ************************************ 00:36:25.556 END TEST nvmf_abort_qd_sizes 00:36:25.556 ************************************ 00:36:25.556 15:53:30 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:25.556 15:53:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:25.556 15:53:30 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:25.556 15:53:30 -- common/autotest_common.sh@10 -- # set +x 00:36:25.556 ************************************ 00:36:25.556 START TEST keyring_file 00:36:25.556 ************************************ 00:36:25.556 15:53:31 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:25.556 * Looking for test storage... 00:36:25.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:25.557 15:53:31 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:25.557 15:53:31 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:36:25.557 15:53:31 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:25.557 15:53:31 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:25.557 15:53:31 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:25.557 15:53:31 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:25.557 15:53:31 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:25.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.557 --rc genhtml_branch_coverage=1 00:36:25.557 --rc genhtml_function_coverage=1 00:36:25.557 --rc genhtml_legend=1 00:36:25.557 --rc geninfo_all_blocks=1 00:36:25.557 --rc geninfo_unexecuted_blocks=1 00:36:25.557 00:36:25.557 ' 00:36:25.557 15:53:31 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:25.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.557 --rc genhtml_branch_coverage=1 00:36:25.557 --rc genhtml_function_coverage=1 00:36:25.557 --rc genhtml_legend=1 00:36:25.557 --rc geninfo_all_blocks=1 00:36:25.557 --rc 
geninfo_unexecuted_blocks=1 00:36:25.557 00:36:25.557 ' 00:36:25.557 15:53:31 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:25.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.557 --rc genhtml_branch_coverage=1 00:36:25.557 --rc genhtml_function_coverage=1 00:36:25.557 --rc genhtml_legend=1 00:36:25.557 --rc geninfo_all_blocks=1 00:36:25.557 --rc geninfo_unexecuted_blocks=1 00:36:25.557 00:36:25.557 ' 00:36:25.557 15:53:31 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:25.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.557 --rc genhtml_branch_coverage=1 00:36:25.557 --rc genhtml_function_coverage=1 00:36:25.557 --rc genhtml_legend=1 00:36:25.557 --rc geninfo_all_blocks=1 00:36:25.557 --rc geninfo_unexecuted_blocks=1 00:36:25.557 00:36:25.557 ' 00:36:25.557 15:53:31 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:25.557 15:53:31 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:25.557 15:53:31 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:25.557 15:53:31 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:25.557 15:53:31 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.557 15:53:31 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.557 15:53:31 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.557 15:53:31 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:25.557 15:53:31 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:25.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:25.557 15:53:31 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:25.557 15:53:31 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:25.557 15:53:31 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:25.557 15:53:31 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:25.557 15:53:31 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:25.557 15:53:31 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:25.557 15:53:31 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:25.557 15:53:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:25.557 15:53:31 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:25.557 15:53:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:25.557 15:53:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:25.557 15:53:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:25.557 15:53:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dsvhNAjlUP 00:36:25.557 15:53:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:25.557 15:53:31 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:25.558 15:53:31 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:36:25.558 15:53:31 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:25.558 15:53:31 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:25.558 15:53:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dsvhNAjlUP 00:36:25.558 15:53:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dsvhNAjlUP 00:36:25.558 15:53:31 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.dsvhNAjlUP 00:36:25.558 15:53:31 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:25.558 15:53:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:25.558 15:53:31 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:25.558 15:53:31 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:25.558 15:53:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:25.558 15:53:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:25.558 15:53:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.d0UiMXJyjh 00:36:25.558 15:53:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:25.558 15:53:31 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:25.558 15:53:31 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:25.558 15:53:31 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:25.558 15:53:31 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:25.558 15:53:31 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:25.558 15:53:31 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:25.558 15:53:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.d0UiMXJyjh 00:36:25.558 15:53:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.d0UiMXJyjh 00:36:25.558 15:53:31 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.d0UiMXJyjh 
00:36:25.558 15:53:31 keyring_file -- keyring/file.sh@30 -- # tgtpid=3291244 00:36:25.558 15:53:31 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:25.558 15:53:31 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3291244 00:36:25.558 15:53:31 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3291244 ']' 00:36:25.558 15:53:31 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:25.558 15:53:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:25.558 15:53:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:25.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:25.558 15:53:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:25.558 15:53:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:25.558 [2024-12-06 15:53:31.376264] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:36:25.558 [2024-12-06 15:53:31.376315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291244 ] 00:36:25.558 [2024-12-06 15:53:31.451559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:25.558 [2024-12-06 15:53:31.494054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:25.817 15:53:31 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:25.817 15:53:31 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:25.817 15:53:31 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:25.817 15:53:31 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.817 15:53:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:25.817 [2024-12-06 15:53:31.704073] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:25.817 null0 00:36:25.817 [2024-12-06 15:53:31.736118] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:25.817 [2024-12-06 15:53:31.736449] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:25.817 15:53:31 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.817 15:53:31 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:25.817 15:53:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:25.817 15:53:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:25.817 15:53:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:25.817 15:53:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:36:25.817 15:53:31 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:25.817 15:53:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:25.817 15:53:31 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:25.818 15:53:31 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.818 15:53:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:25.818 [2024-12-06 15:53:31.764183] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:25.818 request: 00:36:25.818 { 00:36:25.818 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:25.818 "secure_channel": false, 00:36:25.818 "listen_address": { 00:36:25.818 "trtype": "tcp", 00:36:25.818 "traddr": "127.0.0.1", 00:36:25.818 "trsvcid": "4420" 00:36:25.818 }, 00:36:25.818 "method": "nvmf_subsystem_add_listener", 00:36:25.818 "req_id": 1 00:36:25.818 } 00:36:25.818 Got JSON-RPC error response 00:36:25.818 response: 00:36:25.818 { 00:36:25.818 "code": -32602, 00:36:25.818 "message": "Invalid parameters" 00:36:25.818 } 00:36:25.818 15:53:31 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:25.818 15:53:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:25.818 15:53:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:25.818 15:53:31 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:25.818 15:53:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:25.818 15:53:31 keyring_file -- keyring/file.sh@47 -- # bperfpid=3291248 00:36:25.818 15:53:31 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:25.818 15:53:31 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3291248 /var/tmp/bperf.sock 00:36:25.818 15:53:31 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3291248 ']' 00:36:25.818 15:53:31 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:25.818 15:53:31 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:25.818 15:53:31 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:25.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:25.818 15:53:31 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:25.818 15:53:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:26.076 [2024-12-06 15:53:31.815383] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 00:36:26.076 [2024-12-06 15:53:31.815424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291248 ] 00:36:26.076 [2024-12-06 15:53:31.888174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.076 [2024-12-06 15:53:31.929952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:26.076 15:53:32 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:26.076 15:53:32 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:26.076 15:53:32 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dsvhNAjlUP 00:36:26.076 15:53:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dsvhNAjlUP 00:36:26.335 15:53:32 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.d0UiMXJyjh 00:36:26.335 15:53:32 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.d0UiMXJyjh 00:36:26.593 15:53:32 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:26.593 15:53:32 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:26.593 15:53:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:26.593 15:53:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:26.593 15:53:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.593 15:53:32 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.dsvhNAjlUP == \/\t\m\p\/\t\m\p\.\d\s\v\h\N\A\j\l\U\P ]] 00:36:26.593 15:53:32 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:26.593 15:53:32 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:26.593 15:53:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:26.593 15:53:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:26.593 15:53:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.851 15:53:32 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.d0UiMXJyjh == \/\t\m\p\/\t\m\p\.\d\0\U\i\M\X\J\y\j\h ]] 00:36:26.851 15:53:32 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:26.851 15:53:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:26.851 15:53:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:26.851 15:53:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:26.851 15:53:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.851 15:53:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:36:27.110 15:53:32 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:27.110 15:53:32 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:27.110 15:53:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:27.110 15:53:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:27.110 15:53:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:27.110 15:53:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:27.110 15:53:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.368 15:53:33 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:27.368 15:53:33 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:27.368 15:53:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:27.368 [2024-12-06 15:53:33.345701] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:27.626 nvme0n1 00:36:27.626 15:53:33 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:27.626 15:53:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:27.626 15:53:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:27.626 15:53:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:27.626 15:53:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:27.626 15:53:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:36:27.884 15:53:33 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:27.884 15:53:33 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:27.884 15:53:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:27.884 15:53:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:27.884 15:53:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:27.884 15:53:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:27.884 15:53:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.884 15:53:33 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:27.884 15:53:33 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:28.142 Running I/O for 1 seconds... 00:36:29.076 19512.00 IOPS, 76.22 MiB/s 00:36:29.076 Latency(us) 00:36:29.076 [2024-12-06T14:53:35.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:29.076 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:29.076 nvme0n1 : 1.00 19558.95 76.40 0.00 0.00 6532.69 3588.88 10860.25 00:36:29.076 [2024-12-06T14:53:35.074Z] =================================================================================================================== 00:36:29.076 [2024-12-06T14:53:35.074Z] Total : 19558.95 76.40 0.00 0.00 6532.69 3588.88 10860.25 00:36:29.076 { 00:36:29.076 "results": [ 00:36:29.076 { 00:36:29.076 "job": "nvme0n1", 00:36:29.076 "core_mask": "0x2", 00:36:29.076 "workload": "randrw", 00:36:29.076 "percentage": 50, 00:36:29.076 "status": "finished", 00:36:29.076 "queue_depth": 128, 00:36:29.076 "io_size": 4096, 00:36:29.076 "runtime": 1.004144, 00:36:29.076 "iops": 19558.947720645643, 00:36:29.076 "mibps": 76.40213953377204, 
00:36:29.076 "io_failed": 0, 00:36:29.076 "io_timeout": 0, 00:36:29.076 "avg_latency_us": 6532.692563669867, 00:36:29.076 "min_latency_us": 3588.8761904761905, 00:36:29.076 "max_latency_us": 10860.251428571428 00:36:29.076 } 00:36:29.076 ], 00:36:29.076 "core_count": 1 00:36:29.076 } 00:36:29.076 15:53:34 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:29.076 15:53:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:29.333 15:53:35 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:29.333 15:53:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:29.333 15:53:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:29.333 15:53:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:29.333 15:53:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:29.333 15:53:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.333 15:53:35 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:29.590 15:53:35 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:29.590 15:53:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:29.590 15:53:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:29.590 15:53:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:29.590 15:53:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:29.590 15:53:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.590 15:53:35 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:29.590 15:53:35 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:29.590 15:53:35 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:29.590 15:53:35 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:29.590 15:53:35 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:29.590 15:53:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:29.590 15:53:35 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:29.590 15:53:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:29.590 15:53:35 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:29.590 15:53:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:29.848 [2024-12-06 15:53:35.707983] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:29.848 [2024-12-06 15:53:35.708071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1641e30 (107): Transport endpoint is not connected 00:36:29.848 [2024-12-06 15:53:35.709065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1641e30 (9): Bad file descriptor 00:36:29.848 [2024-12-06 15:53:35.710067] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:29.848 [2024-12-06 15:53:35.710077] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:29.848 [2024-12-06 15:53:35.710087] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:29.848 [2024-12-06 15:53:35.710100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:36:29.848 request: 00:36:29.848 { 00:36:29.848 "name": "nvme0", 00:36:29.848 "trtype": "tcp", 00:36:29.848 "traddr": "127.0.0.1", 00:36:29.848 "adrfam": "ipv4", 00:36:29.848 "trsvcid": "4420", 00:36:29.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:29.848 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:29.848 "prchk_reftag": false, 00:36:29.848 "prchk_guard": false, 00:36:29.848 "hdgst": false, 00:36:29.848 "ddgst": false, 00:36:29.848 "psk": "key1", 00:36:29.848 "allow_unrecognized_csi": false, 00:36:29.848 "method": "bdev_nvme_attach_controller", 00:36:29.848 "req_id": 1 00:36:29.848 } 00:36:29.848 Got JSON-RPC error response 00:36:29.848 response: 00:36:29.848 { 00:36:29.848 "code": -5, 00:36:29.848 "message": "Input/output error" 00:36:29.848 } 00:36:29.848 15:53:35 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:29.848 15:53:35 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:29.848 15:53:35 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:29.848 15:53:35 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:29.848 15:53:35 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:29.848 15:53:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:29.848 15:53:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:29.848 15:53:35 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:36:29.848 15:53:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:29.848 15:53:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:30.105 15:53:35 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:30.105 15:53:35 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:30.105 15:53:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:30.105 15:53:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:30.105 15:53:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:30.105 15:53:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:30.105 15:53:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:30.363 15:53:36 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:30.363 15:53:36 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:30.363 15:53:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:30.363 15:53:36 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:30.363 15:53:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:30.620 15:53:36 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:30.620 15:53:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:30.620 15:53:36 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:30.878 15:53:36 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:36:30.878 15:53:36 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.dsvhNAjlUP 00:36:30.878 15:53:36 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.dsvhNAjlUP 00:36:30.878 15:53:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:30.878 15:53:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.dsvhNAjlUP 00:36:30.878 15:53:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:30.878 15:53:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:30.878 15:53:36 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:30.878 15:53:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:30.878 15:53:36 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dsvhNAjlUP 00:36:30.878 15:53:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dsvhNAjlUP 00:36:30.878 [2024-12-06 15:53:36.847055] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.dsvhNAjlUP': 0100660 00:36:30.878 [2024-12-06 15:53:36.847087] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:30.878 request: 00:36:30.878 { 00:36:30.878 "name": "key0", 00:36:30.878 "path": "/tmp/tmp.dsvhNAjlUP", 00:36:30.878 "method": "keyring_file_add_key", 00:36:30.878 "req_id": 1 00:36:30.878 } 00:36:30.878 Got JSON-RPC error response 00:36:30.878 response: 00:36:30.878 { 00:36:30.878 "code": -1, 00:36:30.878 "message": "Operation not permitted" 00:36:30.878 } 00:36:30.878 15:53:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:30.878 15:53:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:30.878 15:53:36 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:30.878 15:53:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:30.878 15:53:36 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.dsvhNAjlUP 00:36:30.878 15:53:36 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dsvhNAjlUP 00:36:30.878 15:53:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dsvhNAjlUP 00:36:31.136 15:53:37 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.dsvhNAjlUP 00:36:31.136 15:53:37 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:31.136 15:53:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:31.136 15:53:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:31.136 15:53:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:31.136 15:53:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:31.136 15:53:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.394 15:53:37 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:31.394 15:53:37 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:31.394 15:53:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:31.394 15:53:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:31.394 15:53:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:31.394 15:53:37 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:31.394 15:53:37 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:31.394 15:53:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:31.394 15:53:37 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:31.394 15:53:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:31.652 [2024-12-06 15:53:37.444642] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.dsvhNAjlUP': No such file or directory 00:36:31.652 [2024-12-06 15:53:37.444666] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:31.652 [2024-12-06 15:53:37.444682] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:31.652 [2024-12-06 15:53:37.444689] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:31.652 [2024-12-06 15:53:37.444697] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:31.652 [2024-12-06 15:53:37.444703] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:31.652 request: 00:36:31.652 { 00:36:31.652 "name": "nvme0", 00:36:31.652 "trtype": "tcp", 00:36:31.652 "traddr": "127.0.0.1", 00:36:31.652 "adrfam": "ipv4", 00:36:31.652 "trsvcid": "4420", 00:36:31.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:31.652 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:36:31.652 "prchk_reftag": false, 00:36:31.652 "prchk_guard": false, 00:36:31.652 "hdgst": false, 00:36:31.652 "ddgst": false, 00:36:31.652 "psk": "key0", 00:36:31.652 "allow_unrecognized_csi": false, 00:36:31.652 "method": "bdev_nvme_attach_controller", 00:36:31.652 "req_id": 1 00:36:31.652 } 00:36:31.652 Got JSON-RPC error response 00:36:31.652 response: 00:36:31.652 { 00:36:31.652 "code": -19, 00:36:31.652 "message": "No such device" 00:36:31.652 } 00:36:31.652 15:53:37 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:31.652 15:53:37 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:31.652 15:53:37 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:31.652 15:53:37 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:31.652 15:53:37 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:31.652 15:53:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:31.910 15:53:37 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:31.910 15:53:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:31.910 15:53:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:31.910 15:53:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:31.910 15:53:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:31.910 15:53:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:31.910 15:53:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pSkgT8j445 00:36:31.910 15:53:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:31.910 15:53:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:31.910 15:53:37 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:36:31.910 15:53:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:31.910 15:53:37 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:31.910 15:53:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:31.910 15:53:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:31.910 15:53:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pSkgT8j445 00:36:31.910 15:53:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pSkgT8j445 00:36:31.910 15:53:37 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.pSkgT8j445 00:36:31.910 15:53:37 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pSkgT8j445 00:36:31.910 15:53:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pSkgT8j445 00:36:32.168 15:53:37 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:32.168 15:53:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:32.426 nvme0n1 00:36:32.426 15:53:38 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:32.426 15:53:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:32.426 15:53:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:32.427 15:53:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:32.427 15:53:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:32.427 
15:53:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:32.427 15:53:38 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:32.427 15:53:38 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:32.427 15:53:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:32.686 15:53:38 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:32.686 15:53:38 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:32.686 15:53:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:32.686 15:53:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:32.686 15:53:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:32.944 15:53:38 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:32.944 15:53:38 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:32.944 15:53:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:32.944 15:53:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:32.944 15:53:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:32.944 15:53:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:32.944 15:53:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:33.202 15:53:38 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:33.202 15:53:38 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:33.202 15:53:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:36:33.202 15:53:39 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:33.202 15:53:39 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:33.202 15:53:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:33.460 15:53:39 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:33.460 15:53:39 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pSkgT8j445 00:36:33.460 15:53:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pSkgT8j445 00:36:33.736 15:53:39 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.d0UiMXJyjh 00:36:33.736 15:53:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.d0UiMXJyjh 00:36:34.048 15:53:39 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:34.048 15:53:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:34.048 nvme0n1 00:36:34.048 15:53:39 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:34.048 15:53:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:34.309 15:53:40 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:34.310 "subsystems": [ 00:36:34.310 { 00:36:34.310 "subsystem": "keyring", 00:36:34.310 
"config": [ 00:36:34.310 { 00:36:34.310 "method": "keyring_file_add_key", 00:36:34.310 "params": { 00:36:34.310 "name": "key0", 00:36:34.310 "path": "/tmp/tmp.pSkgT8j445" 00:36:34.310 } 00:36:34.310 }, 00:36:34.310 { 00:36:34.310 "method": "keyring_file_add_key", 00:36:34.310 "params": { 00:36:34.310 "name": "key1", 00:36:34.310 "path": "/tmp/tmp.d0UiMXJyjh" 00:36:34.310 } 00:36:34.310 } 00:36:34.310 ] 00:36:34.310 }, 00:36:34.310 { 00:36:34.310 "subsystem": "iobuf", 00:36:34.310 "config": [ 00:36:34.310 { 00:36:34.310 "method": "iobuf_set_options", 00:36:34.310 "params": { 00:36:34.310 "small_pool_count": 8192, 00:36:34.310 "large_pool_count": 1024, 00:36:34.310 "small_bufsize": 8192, 00:36:34.310 "large_bufsize": 135168, 00:36:34.310 "enable_numa": false 00:36:34.310 } 00:36:34.310 } 00:36:34.310 ] 00:36:34.310 }, 00:36:34.310 { 00:36:34.310 "subsystem": "sock", 00:36:34.310 "config": [ 00:36:34.310 { 00:36:34.310 "method": "sock_set_default_impl", 00:36:34.310 "params": { 00:36:34.310 "impl_name": "posix" 00:36:34.310 } 00:36:34.310 }, 00:36:34.310 { 00:36:34.310 "method": "sock_impl_set_options", 00:36:34.310 "params": { 00:36:34.310 "impl_name": "ssl", 00:36:34.310 "recv_buf_size": 4096, 00:36:34.310 "send_buf_size": 4096, 00:36:34.310 "enable_recv_pipe": true, 00:36:34.310 "enable_quickack": false, 00:36:34.310 "enable_placement_id": 0, 00:36:34.310 "enable_zerocopy_send_server": true, 00:36:34.310 "enable_zerocopy_send_client": false, 00:36:34.310 "zerocopy_threshold": 0, 00:36:34.310 "tls_version": 0, 00:36:34.310 "enable_ktls": false 00:36:34.310 } 00:36:34.310 }, 00:36:34.310 { 00:36:34.310 "method": "sock_impl_set_options", 00:36:34.310 "params": { 00:36:34.310 "impl_name": "posix", 00:36:34.310 "recv_buf_size": 2097152, 00:36:34.310 "send_buf_size": 2097152, 00:36:34.310 "enable_recv_pipe": true, 00:36:34.310 "enable_quickack": false, 00:36:34.310 "enable_placement_id": 0, 00:36:34.310 "enable_zerocopy_send_server": true, 00:36:34.310 
"enable_zerocopy_send_client": false, 00:36:34.310 "zerocopy_threshold": 0, 00:36:34.310 "tls_version": 0, 00:36:34.310 "enable_ktls": false 00:36:34.310 } 00:36:34.310 } 00:36:34.310 ] 00:36:34.310 }, 00:36:34.310 { 00:36:34.310 "subsystem": "vmd", 00:36:34.310 "config": [] 00:36:34.310 }, 00:36:34.310 { 00:36:34.310 "subsystem": "accel", 00:36:34.310 "config": [ 00:36:34.310 { 00:36:34.310 "method": "accel_set_options", 00:36:34.310 "params": { 00:36:34.310 "small_cache_size": 128, 00:36:34.310 "large_cache_size": 16, 00:36:34.310 "task_count": 2048, 00:36:34.310 "sequence_count": 2048, 00:36:34.310 "buf_count": 2048 00:36:34.310 } 00:36:34.310 } 00:36:34.310 ] 00:36:34.310 }, 00:36:34.310 { 00:36:34.310 "subsystem": "bdev", 00:36:34.310 "config": [ 00:36:34.310 { 00:36:34.310 "method": "bdev_set_options", 00:36:34.310 "params": { 00:36:34.310 "bdev_io_pool_size": 65535, 00:36:34.310 "bdev_io_cache_size": 256, 00:36:34.310 "bdev_auto_examine": true, 00:36:34.310 "iobuf_small_cache_size": 128, 00:36:34.310 "iobuf_large_cache_size": 16 00:36:34.310 } 00:36:34.310 }, 00:36:34.310 { 00:36:34.310 "method": "bdev_raid_set_options", 00:36:34.310 "params": { 00:36:34.310 "process_window_size_kb": 1024, 00:36:34.310 "process_max_bandwidth_mb_sec": 0 00:36:34.310 } 00:36:34.310 }, 00:36:34.310 { 00:36:34.310 "method": "bdev_iscsi_set_options", 00:36:34.310 "params": { 00:36:34.310 "timeout_sec": 30 00:36:34.310 } 00:36:34.310 }, 00:36:34.310 { 00:36:34.310 "method": "bdev_nvme_set_options", 00:36:34.310 "params": { 00:36:34.310 "action_on_timeout": "none", 00:36:34.310 "timeout_us": 0, 00:36:34.310 "timeout_admin_us": 0, 00:36:34.310 "keep_alive_timeout_ms": 10000, 00:36:34.310 "arbitration_burst": 0, 00:36:34.310 "low_priority_weight": 0, 00:36:34.310 "medium_priority_weight": 0, 00:36:34.310 "high_priority_weight": 0, 00:36:34.310 "nvme_adminq_poll_period_us": 10000, 00:36:34.310 "nvme_ioq_poll_period_us": 0, 00:36:34.310 "io_queue_requests": 512, 00:36:34.310 
"delay_cmd_submit": true, 00:36:34.310 "transport_retry_count": 4, 00:36:34.310 "bdev_retry_count": 3, 00:36:34.310 "transport_ack_timeout": 0, 00:36:34.310 "ctrlr_loss_timeout_sec": 0, 00:36:34.310 "reconnect_delay_sec": 0, 00:36:34.310 "fast_io_fail_timeout_sec": 0, 00:36:34.310 "disable_auto_failback": false, 00:36:34.310 "generate_uuids": false, 00:36:34.310 "transport_tos": 0, 00:36:34.310 "nvme_error_stat": false, 00:36:34.310 "rdma_srq_size": 0, 00:36:34.310 "io_path_stat": false, 00:36:34.311 "allow_accel_sequence": false, 00:36:34.311 "rdma_max_cq_size": 0, 00:36:34.311 "rdma_cm_event_timeout_ms": 0, 00:36:34.311 "dhchap_digests": [ 00:36:34.311 "sha256", 00:36:34.311 "sha384", 00:36:34.311 "sha512" 00:36:34.311 ], 00:36:34.311 "dhchap_dhgroups": [ 00:36:34.311 "null", 00:36:34.311 "ffdhe2048", 00:36:34.311 "ffdhe3072", 00:36:34.311 "ffdhe4096", 00:36:34.311 "ffdhe6144", 00:36:34.311 "ffdhe8192" 00:36:34.311 ] 00:36:34.311 } 00:36:34.311 }, 00:36:34.311 { 00:36:34.311 "method": "bdev_nvme_attach_controller", 00:36:34.311 "params": { 00:36:34.311 "name": "nvme0", 00:36:34.311 "trtype": "TCP", 00:36:34.311 "adrfam": "IPv4", 00:36:34.311 "traddr": "127.0.0.1", 00:36:34.311 "trsvcid": "4420", 00:36:34.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:34.311 "prchk_reftag": false, 00:36:34.311 "prchk_guard": false, 00:36:34.311 "ctrlr_loss_timeout_sec": 0, 00:36:34.311 "reconnect_delay_sec": 0, 00:36:34.311 "fast_io_fail_timeout_sec": 0, 00:36:34.311 "psk": "key0", 00:36:34.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:34.311 "hdgst": false, 00:36:34.311 "ddgst": false, 00:36:34.311 "multipath": "multipath" 00:36:34.311 } 00:36:34.311 }, 00:36:34.311 { 00:36:34.311 "method": "bdev_nvme_set_hotplug", 00:36:34.311 "params": { 00:36:34.311 "period_us": 100000, 00:36:34.311 "enable": false 00:36:34.311 } 00:36:34.311 }, 00:36:34.311 { 00:36:34.311 "method": "bdev_wait_for_examine" 00:36:34.311 } 00:36:34.311 ] 00:36:34.311 }, 00:36:34.311 { 00:36:34.311 
"subsystem": "nbd", 00:36:34.311 "config": [] 00:36:34.311 } 00:36:34.311 ] 00:36:34.311 }' 00:36:34.311 15:53:40 keyring_file -- keyring/file.sh@115 -- # killprocess 3291248 00:36:34.311 15:53:40 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3291248 ']' 00:36:34.311 15:53:40 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3291248 00:36:34.311 15:53:40 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:34.311 15:53:40 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:34.311 15:53:40 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3291248 00:36:34.570 15:53:40 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:34.570 15:53:40 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:34.570 15:53:40 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3291248' 00:36:34.570 killing process with pid 3291248 00:36:34.570 15:53:40 keyring_file -- common/autotest_common.sh@973 -- # kill 3291248 00:36:34.570 Received shutdown signal, test time was about 1.000000 seconds 00:36:34.570 00:36:34.570 Latency(us) 00:36:34.570 [2024-12-06T14:53:40.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:34.570 [2024-12-06T14:53:40.568Z] =================================================================================================================== 00:36:34.570 [2024-12-06T14:53:40.568Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:34.570 15:53:40 keyring_file -- common/autotest_common.sh@978 -- # wait 3291248 00:36:34.570 15:53:40 keyring_file -- keyring/file.sh@118 -- # bperfpid=3292774 00:36:34.570 15:53:40 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3292774 /var/tmp/bperf.sock 00:36:34.570 15:53:40 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3292774 ']' 00:36:34.570 15:53:40 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:36:34.570 15:53:40 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:34.570 15:53:40 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:34.570 15:53:40 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:34.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:34.570 15:53:40 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:34.570 "subsystems": [ 00:36:34.570 { 00:36:34.570 "subsystem": "keyring", 00:36:34.570 "config": [ 00:36:34.570 { 00:36:34.570 "method": "keyring_file_add_key", 00:36:34.570 "params": { 00:36:34.570 "name": "key0", 00:36:34.570 "path": "/tmp/tmp.pSkgT8j445" 00:36:34.570 } 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "method": "keyring_file_add_key", 00:36:34.570 "params": { 00:36:34.570 "name": "key1", 00:36:34.570 "path": "/tmp/tmp.d0UiMXJyjh" 00:36:34.570 } 00:36:34.570 } 00:36:34.570 ] 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "subsystem": "iobuf", 00:36:34.570 "config": [ 00:36:34.570 { 00:36:34.570 "method": "iobuf_set_options", 00:36:34.570 "params": { 00:36:34.570 "small_pool_count": 8192, 00:36:34.570 "large_pool_count": 1024, 00:36:34.570 "small_bufsize": 8192, 00:36:34.570 "large_bufsize": 135168, 00:36:34.570 "enable_numa": false 00:36:34.570 } 00:36:34.570 } 00:36:34.570 ] 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "subsystem": "sock", 00:36:34.570 "config": [ 00:36:34.570 { 00:36:34.570 "method": "sock_set_default_impl", 00:36:34.570 "params": { 00:36:34.570 "impl_name": "posix" 00:36:34.570 } 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "method": "sock_impl_set_options", 00:36:34.570 "params": { 00:36:34.570 "impl_name": "ssl", 00:36:34.570 "recv_buf_size": 4096, 00:36:34.570 
"send_buf_size": 4096, 00:36:34.570 "enable_recv_pipe": true, 00:36:34.570 "enable_quickack": false, 00:36:34.570 "enable_placement_id": 0, 00:36:34.570 "enable_zerocopy_send_server": true, 00:36:34.570 "enable_zerocopy_send_client": false, 00:36:34.570 "zerocopy_threshold": 0, 00:36:34.570 "tls_version": 0, 00:36:34.570 "enable_ktls": false 00:36:34.570 } 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "method": "sock_impl_set_options", 00:36:34.570 "params": { 00:36:34.570 "impl_name": "posix", 00:36:34.570 "recv_buf_size": 2097152, 00:36:34.570 "send_buf_size": 2097152, 00:36:34.570 "enable_recv_pipe": true, 00:36:34.570 "enable_quickack": false, 00:36:34.570 "enable_placement_id": 0, 00:36:34.570 "enable_zerocopy_send_server": true, 00:36:34.570 "enable_zerocopy_send_client": false, 00:36:34.570 "zerocopy_threshold": 0, 00:36:34.570 "tls_version": 0, 00:36:34.570 "enable_ktls": false 00:36:34.570 } 00:36:34.570 } 00:36:34.570 ] 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "subsystem": "vmd", 00:36:34.570 "config": [] 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "subsystem": "accel", 00:36:34.570 "config": [ 00:36:34.570 { 00:36:34.570 "method": "accel_set_options", 00:36:34.570 "params": { 00:36:34.570 "small_cache_size": 128, 00:36:34.570 "large_cache_size": 16, 00:36:34.570 "task_count": 2048, 00:36:34.570 "sequence_count": 2048, 00:36:34.570 "buf_count": 2048 00:36:34.570 } 00:36:34.570 } 00:36:34.570 ] 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "subsystem": "bdev", 00:36:34.570 "config": [ 00:36:34.570 { 00:36:34.570 "method": "bdev_set_options", 00:36:34.570 "params": { 00:36:34.570 "bdev_io_pool_size": 65535, 00:36:34.570 "bdev_io_cache_size": 256, 00:36:34.570 "bdev_auto_examine": true, 00:36:34.570 "iobuf_small_cache_size": 128, 00:36:34.570 "iobuf_large_cache_size": 16 00:36:34.570 } 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "method": "bdev_raid_set_options", 00:36:34.570 "params": { 00:36:34.570 "process_window_size_kb": 1024, 00:36:34.570 
"process_max_bandwidth_mb_sec": 0 00:36:34.570 } 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "method": "bdev_iscsi_set_options", 00:36:34.570 "params": { 00:36:34.570 "timeout_sec": 30 00:36:34.570 } 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "method": "bdev_nvme_set_options", 00:36:34.570 "params": { 00:36:34.570 "action_on_timeout": "none", 00:36:34.570 "timeout_us": 0, 00:36:34.570 "timeout_admin_us": 0, 00:36:34.570 "keep_alive_timeout_ms": 10000, 00:36:34.570 "arbitration_burst": 0, 00:36:34.570 "low_priority_weight": 0, 00:36:34.570 "medium_priority_weight": 0, 00:36:34.570 "high_priority_weight": 0, 00:36:34.570 "nvme_adminq_poll_period_us": 10000, 00:36:34.570 "nvme_ioq_poll_period_us": 0, 00:36:34.570 "io_queue_requests": 512, 00:36:34.570 "delay_cmd_submit": true, 00:36:34.570 "transport_retry_count": 4, 00:36:34.570 "bdev_retry_count": 3, 00:36:34.570 "transport_ack_timeout": 0, 00:36:34.570 "ctrlr_loss_timeout_sec": 0, 00:36:34.570 "reconnect_delay_sec": 0, 00:36:34.570 "fast_io_fail_timeout_sec": 0, 00:36:34.570 "disable_auto_failback": false, 00:36:34.570 "generate_uuids": false, 00:36:34.570 "transport_tos": 0, 00:36:34.570 "nvme_error_stat": false, 00:36:34.570 "rdma_srq_size": 0, 00:36:34.570 "io_path_stat": false, 00:36:34.570 "allow_accel_sequence": false, 00:36:34.570 "rdma_max_cq_size": 0, 00:36:34.570 "rdma_cm_event_timeout_ms": 0, 00:36:34.570 "dhchap_digests": [ 00:36:34.570 "sha256", 00:36:34.570 "sha384", 00:36:34.570 "sha512" 00:36:34.570 ], 00:36:34.570 "dhchap_dhgroups": [ 00:36:34.570 "null", 00:36:34.570 "ffdhe2048", 00:36:34.570 "ffdhe3072", 00:36:34.570 "ffdhe4096", 00:36:34.570 "ffdhe6144", 00:36:34.570 "ffdhe8192" 00:36:34.570 ] 00:36:34.570 } 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "method": "bdev_nvme_attach_controller", 00:36:34.570 "params": { 00:36:34.570 "name": "nvme0", 00:36:34.570 "trtype": "TCP", 00:36:34.570 "adrfam": "IPv4", 00:36:34.570 "traddr": "127.0.0.1", 00:36:34.570 "trsvcid": "4420", 00:36:34.570 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:36:34.570 "prchk_reftag": false, 00:36:34.570 "prchk_guard": false, 00:36:34.570 "ctrlr_loss_timeout_sec": 0, 00:36:34.570 "reconnect_delay_sec": 0, 00:36:34.570 "fast_io_fail_timeout_sec": 0, 00:36:34.570 "psk": "key0", 00:36:34.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:34.570 "hdgst": false, 00:36:34.570 "ddgst": false, 00:36:34.570 "multipath": "multipath" 00:36:34.570 } 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "method": "bdev_nvme_set_hotplug", 00:36:34.570 "params": { 00:36:34.570 "period_us": 100000, 00:36:34.570 "enable": false 00:36:34.570 } 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "method": "bdev_wait_for_examine" 00:36:34.570 } 00:36:34.570 ] 00:36:34.570 }, 00:36:34.570 { 00:36:34.570 "subsystem": "nbd", 00:36:34.570 "config": [] 00:36:34.570 } 00:36:34.570 ] 00:36:34.570 }' 00:36:34.570 15:53:40 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:34.570 15:53:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:34.570 [2024-12-06 15:53:40.515755] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
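The JSON blob echoed by file.sh@116 above is bdevperf's startup config, handed over as /dev/fd/63 via process substitution. A minimal sketch of its load-bearing parts — key names, paths, and NQNs are copied from the log; the iobuf/sock/accel tuning and most bdev options are abridged away:

```python
# Abridged reconstruction of the config piped to bdevperf above: the keyring
# subsystem registers two file-backed keys, and the bdev subsystem attaches an
# NVMe/TCP controller that names "key0" as its TLS PSK.
import json

config = {
    "subsystems": [
        {
            "subsystem": "keyring",
            "config": [
                {"method": "keyring_file_add_key",
                 "params": {"name": "key0", "path": "/tmp/tmp.pSkgT8j445"}},
                {"method": "keyring_file_add_key",
                 "params": {"name": "key1", "path": "/tmp/tmp.d0UiMXJyjh"}},
            ],
        },
        {
            "subsystem": "bdev",
            "config": [
                {"method": "bdev_nvme_attach_controller",
                 "params": {"name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                            "traddr": "127.0.0.1", "trsvcid": "4420",
                            "subnqn": "nqn.2016-06.io.spdk:cnode0",
                            "hostnqn": "nqn.2016-06.io.spdk:host0",
                            "psk": "key0"}},
            ],
        },
    ]
}

# In the shell trace this is roughly:
#   bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
#       -r /var/tmp/bperf.sock -z -c <(echo "$cfg")
cfg = json.dumps(config)
```

Because the attach happens at config load, the *NOTICE*: TLS support is considered experimental line appears during bdevperf startup rather than from a later RPC.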
00:36:34.570 [2024-12-06 15:53:40.515805] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292774 ] 00:36:34.828 [2024-12-06 15:53:40.589706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.828 [2024-12-06 15:53:40.631550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.828 [2024-12-06 15:53:40.792965] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:35.392 15:53:41 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:35.392 15:53:41 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:35.392 15:53:41 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:35.392 15:53:41 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:35.392 15:53:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.649 15:53:41 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:35.649 15:53:41 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:35.649 15:53:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:35.649 15:53:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.649 15:53:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.649 15:53:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:35.649 15:53:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.906 15:53:41 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:35.906 15:53:41 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:35.906 15:53:41 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:35.906 15:53:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.906 15:53:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.906 15:53:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:35.906 15:53:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.163 15:53:41 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:36.163 15:53:41 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:36.163 15:53:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:36.163 15:53:41 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:36.163 15:53:42 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:36.163 15:53:42 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:36.163 15:53:42 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.pSkgT8j445 /tmp/tmp.d0UiMXJyjh 00:36:36.163 15:53:42 keyring_file -- keyring/file.sh@20 -- # killprocess 3292774 00:36:36.163 15:53:42 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3292774 ']' 00:36:36.163 15:53:42 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3292774 00:36:36.163 15:53:42 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:36.163 15:53:42 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:36.163 15:53:42 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3292774 00:36:36.422 15:53:42 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:36.422 15:53:42 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:36.422 15:53:42 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3292774' 00:36:36.422 killing process with pid 3292774 00:36:36.422 15:53:42 keyring_file -- common/autotest_common.sh@973 -- # kill 3292774 00:36:36.422 Received shutdown signal, test time was about 1.000000 seconds 00:36:36.422 00:36:36.422 Latency(us) 00:36:36.422 [2024-12-06T14:53:42.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.422 [2024-12-06T14:53:42.420Z] =================================================================================================================== 00:36:36.422 [2024-12-06T14:53:42.420Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:36.422 15:53:42 keyring_file -- common/autotest_common.sh@978 -- # wait 3292774 00:36:36.422 15:53:42 keyring_file -- keyring/file.sh@21 -- # killprocess 3291244 00:36:36.422 15:53:42 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3291244 ']' 00:36:36.422 15:53:42 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3291244 00:36:36.422 15:53:42 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:36.422 15:53:42 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:36.422 15:53:42 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3291244 00:36:36.422 15:53:42 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:36.422 15:53:42 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:36.422 15:53:42 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3291244' 00:36:36.422 killing process with pid 3291244 00:36:36.422 15:53:42 keyring_file -- common/autotest_common.sh@973 -- # kill 3291244 00:36:36.680 15:53:42 keyring_file -- common/autotest_common.sh@978 -- # wait 3291244 00:36:36.940 00:36:36.940 real 0m11.709s 00:36:36.940 user 0m28.999s 00:36:36.940 sys 0m2.732s 00:36:36.940 15:53:42 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
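The get_refcnt checks earlier in the trace (file.sh@122/123) list keys over the bperf RPC socket and filter with `jq '.[] | select(.name == "keyN") | .refcnt'`. A sketch of the same selection in Python, run against a response shaped like the log's — the refcnt values mirror the `(( 2 == 2 ))` and `(( 1 == 1 ))` assertions, presumably 2 for key0 because the attached nvme0 controller holds it as its PSK in addition to the keyring:

```python
# Sample keyring_get_keys response; names and paths are taken from the log,
# refcnt values from the test's own assertions.
keys = [
    {"name": "key0", "path": "/tmp/tmp.pSkgT8j445", "refcnt": 2},
    {"name": "key1", "path": "/tmp/tmp.d0UiMXJyjh", "refcnt": 1},
]

def get_refcnt(name: str) -> int:
    # jq equivalent: .[] | select(.name == $name) | .refcnt
    return next(k["refcnt"] for k in keys if k["name"] == name)
```

The preceding `jq length` check on the same response is what backs the `(( 2 == 2 ))` key-count test at file.sh@121.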
00:36:36.940 15:53:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:36.940 ************************************ 00:36:36.940 END TEST keyring_file 00:36:36.940 ************************************ 00:36:36.940 15:53:42 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:36.940 15:53:42 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:36.940 15:53:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:36.940 15:53:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:36.940 15:53:42 -- common/autotest_common.sh@10 -- # set +x 00:36:36.940 ************************************ 00:36:36.940 START TEST keyring_linux 00:36:36.940 ************************************ 00:36:36.940 15:53:42 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:36.940 Joined session keyring: 913281486 00:36:36.940 * Looking for test storage... 
00:36:36.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:36.940 15:53:42 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:36.940 15:53:42 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:36:36.940 15:53:42 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:37.199 15:53:42 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:37.199 15:53:42 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:37.199 15:53:42 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:37.199 15:53:42 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:37.199 15:53:42 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:37.199 15:53:42 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:37.199 15:53:42 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:37.199 15:53:42 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:37.199 15:53:42 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:37.199 15:53:42 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:37.200 15:53:42 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:37.200 15:53:42 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:37.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.200 --rc genhtml_branch_coverage=1 00:36:37.200 --rc genhtml_function_coverage=1 00:36:37.200 --rc genhtml_legend=1 00:36:37.200 --rc geninfo_all_blocks=1 00:36:37.200 --rc geninfo_unexecuted_blocks=1 00:36:37.200 00:36:37.200 ' 00:36:37.200 15:53:42 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:37.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.200 --rc genhtml_branch_coverage=1 00:36:37.200 --rc genhtml_function_coverage=1 00:36:37.200 --rc genhtml_legend=1 00:36:37.200 --rc geninfo_all_blocks=1 00:36:37.200 --rc geninfo_unexecuted_blocks=1 00:36:37.200 00:36:37.200 ' 
00:36:37.200 15:53:42 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:37.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.200 --rc genhtml_branch_coverage=1 00:36:37.200 --rc genhtml_function_coverage=1 00:36:37.200 --rc genhtml_legend=1 00:36:37.200 --rc geninfo_all_blocks=1 00:36:37.200 --rc geninfo_unexecuted_blocks=1 00:36:37.200 00:36:37.200 ' 00:36:37.200 15:53:42 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:37.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.200 --rc genhtml_branch_coverage=1 00:36:37.200 --rc genhtml_function_coverage=1 00:36:37.200 --rc genhtml_legend=1 00:36:37.200 --rc geninfo_all_blocks=1 00:36:37.200 --rc geninfo_unexecuted_blocks=1 00:36:37.200 00:36:37.200 ' 00:36:37.200 15:53:42 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:37.200 15:53:42 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
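The `nvme gen-hostnqn` call at nvmf/common.sh@17 above produces the NVME_HOSTNQN seen on the next trace line, an NQN in the 2014-08 UUID form. A hedged sketch of that shape — real nvme-cli derives the UUID from the host (here it matches the host ID 00ad29c2-ccbd-e911-906e-0017a4403562), while `uuid.uuid4()` below is only a stand-in:

```python
# Build a host NQN shaped like the log's
# nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562.
# uuid.uuid4() stands in for nvme-cli's host-derived UUID.
import uuid

hostnqn = f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"
```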
00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:37.200 15:53:42 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:37.200 15:53:42 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.200 15:53:42 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.200 15:53:42 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.200 15:53:42 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:37.200 15:53:42 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:37.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:37.200 15:53:42 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:37.200 15:53:43 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:37.200 15:53:43 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:37.200 15:53:43 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:37.200 15:53:43 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:37.200 15:53:43 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:37.200 15:53:43 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:37.200 15:53:43 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:37.200 15:53:43 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:37.200 15:53:43 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:37.200 15:53:43 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:36:37.200 15:53:43 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:37.200 15:53:43 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:37.200 /tmp/:spdk-test:key0 00:36:37.200 15:53:43 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:37.200 15:53:43 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:37.200 15:53:43 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:37.200 15:53:43 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:37.200 15:53:43 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:37.200 15:53:43 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:37.200 15:53:43 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:37.200 15:53:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:37.200 /tmp/:spdk-test:key1 00:36:37.200 15:53:43 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:37.200 
15:53:43 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3293326 00:36:37.200 15:53:43 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3293326 00:36:37.200 15:53:43 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3293326 ']' 00:36:37.200 15:53:43 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:37.200 15:53:43 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:37.200 15:53:43 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:37.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:37.200 15:53:43 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:37.200 15:53:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:37.200 [2024-12-06 15:53:43.126963] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
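The prep_key steps just above (keyring/common.sh@15-23, via format_interchange_psk and the `python -` heredoc in nvmf/common.sh) turn each configured hex string into a TLS PSK interchange string before writing it to /tmp/:spdk-test:keyN. A hedged reconstruction — the append-CRC32-then-base64 framing is inferred from the keyctl output in the log, not copied from the script:

```python
# Wrap an ASCII configured key as an NVMe TLS PSK interchange string:
# "NVMeTLSkey-1:<digest>:" + base64(key bytes || CRC32(key bytes)) + ":".
# The little-endian CRC32 tail is an assumption inferred from the log output.
import base64
import struct
import zlib

def format_interchange_psk(key: str, digest: int = 0) -> str:
    raw = key.encode()
    blob = raw + struct.pack("<I", zlib.crc32(raw))
    return f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(blob).decode()}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
```

If the framing assumption holds, this reproduces the `NVMeTLSkey-1:00:MDAx…` string that the later `keyctl add user :spdk-test:key0 … @s` line stores in the session keyring.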
00:36:37.200 [2024-12-06 15:53:43.127009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293326 ] 00:36:37.459 [2024-12-06 15:53:43.200652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.459 [2024-12-06 15:53:43.242776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:37.717 15:53:43 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:37.717 15:53:43 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:37.717 15:53:43 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:37.717 15:53:43 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.717 15:53:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:37.717 [2024-12-06 15:53:43.469200] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:37.717 null0 00:36:37.717 [2024-12-06 15:53:43.501250] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:37.717 [2024-12-06 15:53:43.501612] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:37.717 15:53:43 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.717 15:53:43 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:37.717 68923100 00:36:37.717 15:53:43 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:37.717 540668111 00:36:37.717 15:53:43 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3293332 00:36:37.717 15:53:43 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3293332 /var/tmp/bperf.sock 00:36:37.718 15:53:43 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:37.718 15:53:43 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3293332 ']' 00:36:37.718 15:53:43 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:37.718 15:53:43 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:37.718 15:53:43 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:37.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:37.718 15:53:43 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:37.718 15:53:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:37.718 [2024-12-06 15:53:43.572463] Starting SPDK v25.01-pre git sha1 562857cff / DPDK 24.03.0 initialization... 
00:36:37.718 [2024-12-06 15:53:43.572506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293332 ] 00:36:37.718 [2024-12-06 15:53:43.645289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.718 [2024-12-06 15:53:43.687165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:37.976 15:53:43 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:37.976 15:53:43 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:37.976 15:53:43 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:37.976 15:53:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:37.976 15:53:43 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:37.976 15:53:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:38.234 15:53:44 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:38.234 15:53:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:38.491 [2024-12-06 15:53:44.321185] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:38.491 nvme0n1 00:36:38.491 15:53:44 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:36:38.491 15:53:44 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:38.491 15:53:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:38.491 15:53:44 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:38.491 15:53:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:38.491 15:53:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.748 15:53:44 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:38.748 15:53:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:38.748 15:53:44 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:38.748 15:53:44 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:38.748 15:53:44 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.748 15:53:44 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:38.748 15:53:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.007 15:53:44 keyring_linux -- keyring/linux.sh@25 -- # sn=68923100 00:36:39.007 15:53:44 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:39.007 15:53:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:39.007 15:53:44 keyring_linux -- keyring/linux.sh@26 -- # [[ 68923100 == \6\8\9\2\3\1\0\0 ]] 00:36:39.007 15:53:44 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 68923100 00:36:39.007 15:53:44 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:39.007 15:53:44 keyring_linux -- 
keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:39.007 Running I/O for 1 seconds... 00:36:40.380 21850.00 IOPS, 85.35 MiB/s 00:36:40.380 Latency(us) 00:36:40.380 [2024-12-06T14:53:46.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:40.381 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:40.381 nvme0n1 : 1.01 21848.63 85.35 0.00 0.00 5839.17 4993.22 13668.94 00:36:40.381 [2024-12-06T14:53:46.379Z] =================================================================================================================== 00:36:40.381 [2024-12-06T14:53:46.379Z] Total : 21848.63 85.35 0.00 0.00 5839.17 4993.22 13668.94 00:36:40.381 { 00:36:40.381 "results": [ 00:36:40.381 { 00:36:40.381 "job": "nvme0n1", 00:36:40.381 "core_mask": "0x2", 00:36:40.381 "workload": "randread", 00:36:40.381 "status": "finished", 00:36:40.381 "queue_depth": 128, 00:36:40.381 "io_size": 4096, 00:36:40.381 "runtime": 1.005921, 00:36:40.381 "iops": 21848.63423668459, 00:36:40.381 "mibps": 85.34622748704918, 00:36:40.381 "io_failed": 0, 00:36:40.381 "io_timeout": 0, 00:36:40.381 "avg_latency_us": 5839.174419441087, 00:36:40.381 "min_latency_us": 4993.219047619048, 00:36:40.381 "max_latency_us": 13668.937142857143 00:36:40.381 } 00:36:40.381 ], 00:36:40.381 "core_count": 1 00:36:40.381 } 00:36:40.381 15:53:45 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:40.381 15:53:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:40.381 15:53:46 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:40.381 15:53:46 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:40.381 15:53:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:40.381 15:53:46 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:40.381 15:53:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:40.381 15:53:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.381 15:53:46 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:40.381 15:53:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:40.381 15:53:46 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:40.381 15:53:46 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:40.381 15:53:46 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:40.381 15:53:46 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:40.381 15:53:46 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:40.381 15:53:46 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:40.381 15:53:46 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:40.381 15:53:46 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:40.381 15:53:46 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:40.381 15:53:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:40.638 [2024-12-06 15:53:46.532195] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:40.638 [2024-12-06 15:53:46.532948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fffbc0 (107): Transport endpoint is not connected 00:36:40.638 [2024-12-06 15:53:46.533943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fffbc0 (9): Bad file descriptor 00:36:40.638 [2024-12-06 15:53:46.534945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:40.638 [2024-12-06 15:53:46.534955] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:40.638 [2024-12-06 15:53:46.534962] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:40.638 [2024-12-06 15:53:46.534970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:40.638 request: 00:36:40.638 { 00:36:40.638 "name": "nvme0", 00:36:40.638 "trtype": "tcp", 00:36:40.638 "traddr": "127.0.0.1", 00:36:40.638 "adrfam": "ipv4", 00:36:40.638 "trsvcid": "4420", 00:36:40.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:40.638 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:40.638 "prchk_reftag": false, 00:36:40.638 "prchk_guard": false, 00:36:40.638 "hdgst": false, 00:36:40.638 "ddgst": false, 00:36:40.638 "psk": ":spdk-test:key1", 00:36:40.638 "allow_unrecognized_csi": false, 00:36:40.638 "method": "bdev_nvme_attach_controller", 00:36:40.638 "req_id": 1 00:36:40.638 } 00:36:40.638 Got JSON-RPC error response 00:36:40.638 response: 00:36:40.638 { 00:36:40.638 "code": -5, 00:36:40.638 "message": "Input/output error" 00:36:40.638 } 00:36:40.638 15:53:46 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:40.638 15:53:46 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:40.638 15:53:46 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:40.638 15:53:46 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:40.638 15:53:46 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:40.638 15:53:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:40.638 15:53:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:40.638 15:53:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:40.638 15:53:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:40.638 15:53:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:40.638 15:53:46 keyring_linux -- keyring/linux.sh@33 -- # sn=68923100 00:36:40.638 15:53:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 68923100 00:36:40.638 1 links removed 00:36:40.638 15:53:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:40.638 15:53:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:40.638 
15:53:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:40.638 15:53:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:40.638 15:53:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:40.638 15:53:46 keyring_linux -- keyring/linux.sh@33 -- # sn=540668111 00:36:40.639 15:53:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 540668111 00:36:40.639 1 links removed 00:36:40.639 15:53:46 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3293332 00:36:40.639 15:53:46 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3293332 ']' 00:36:40.639 15:53:46 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3293332 00:36:40.639 15:53:46 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:40.639 15:53:46 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:40.639 15:53:46 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3293332 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3293332' 00:36:40.897 killing process with pid 3293332 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@973 -- # kill 3293332 00:36:40.897 Received shutdown signal, test time was about 1.000000 seconds 00:36:40.897 00:36:40.897 Latency(us) 00:36:40.897 [2024-12-06T14:53:46.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:40.897 [2024-12-06T14:53:46.895Z] =================================================================================================================== 00:36:40.897 [2024-12-06T14:53:46.895Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@978 -- # wait 3293332 
00:36:40.897 15:53:46 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3293326 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3293326 ']' 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3293326 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3293326 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3293326' 00:36:40.897 killing process with pid 3293326 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@973 -- # kill 3293326 00:36:40.897 15:53:46 keyring_linux -- common/autotest_common.sh@978 -- # wait 3293326 00:36:41.156 00:36:41.156 real 0m4.358s 00:36:41.156 user 0m8.261s 00:36:41.156 sys 0m1.411s 00:36:41.156 15:53:47 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:41.156 15:53:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:41.156 ************************************ 00:36:41.156 END TEST keyring_linux 00:36:41.156 ************************************ 00:36:41.415 15:53:47 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:41.415 15:53:47 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:41.415 15:53:47 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:41.415 15:53:47 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:41.415 15:53:47 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:41.415 15:53:47 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:41.415 15:53:47 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:41.415 15:53:47 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:36:41.415 15:53:47 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:41.415 15:53:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:41.415 15:53:47 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:41.415 15:53:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:41.415 15:53:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:41.415 15:53:47 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:41.415 15:53:47 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:41.415 15:53:47 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:41.415 15:53:47 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:41.415 15:53:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:41.415 15:53:47 -- common/autotest_common.sh@10 -- # set +x 00:36:41.415 15:53:47 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:41.415 15:53:47 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:41.415 15:53:47 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:41.415 15:53:47 -- common/autotest_common.sh@10 -- # set +x 00:36:46.687 INFO: APP EXITING 00:36:46.687 INFO: killing all VMs 00:36:46.687 INFO: killing vhost app 00:36:46.687 INFO: EXIT DONE 00:36:49.224 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:49.224 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:49.224 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:49.224 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:49.224 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:49.224 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:49.224 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:49.224 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:49.224 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:49.224 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:49.224 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:49.224 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:49.224 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:49.224 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:49.224 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:49.224 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:49.224 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:52.514 Cleaning 00:36:52.514 Removing: /var/run/dpdk/spdk0/config 00:36:52.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:52.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:52.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:52.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:52.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:52.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:52.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:52.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:52.514 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:52.514 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:52.514 Removing: /var/run/dpdk/spdk1/config 00:36:52.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:52.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:52.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:52.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:52.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:52.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:52.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:52.514 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:52.514 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:52.514 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:52.514 Removing: /var/run/dpdk/spdk2/config 00:36:52.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:52.514 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:52.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:52.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:52.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:52.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:52.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:52.514 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:52.514 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:52.514 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:52.514 Removing: /var/run/dpdk/spdk3/config 00:36:52.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:52.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:52.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:52.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:52.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:52.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:52.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:52.514 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:52.514 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:52.514 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:52.514 Removing: /var/run/dpdk/spdk4/config 00:36:52.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:52.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:52.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:52.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:52.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:52.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:52.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:52.514 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:52.514 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:52.514 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:36:52.514 Removing: /dev/shm/bdev_svc_trace.1 00:36:52.514 Removing: /dev/shm/nvmf_trace.0 00:36:52.514 Removing: /dev/shm/spdk_tgt_trace.pid2813458 00:36:52.514 Removing: /var/run/dpdk/spdk0 00:36:52.514 Removing: /var/run/dpdk/spdk1 00:36:52.514 Removing: /var/run/dpdk/spdk2 00:36:52.514 Removing: /var/run/dpdk/spdk3 00:36:52.514 Removing: /var/run/dpdk/spdk4 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2811089 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2812158 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2813458 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2814025 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2814948 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2815062 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2816039 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2816187 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2816410 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2818136 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2819638 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2819926 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2820219 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2820518 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2820648 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2820862 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2821111 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2821400 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2822238 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2825348 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2825621 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2825876 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2825885 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2826376 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2826379 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2826874 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2826882 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2827186 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2827365 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2827520 00:36:52.514 Removing: 
/var/run/dpdk/spdk_pid2827634 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2828114 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2828283 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2828618 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2832464 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2836849 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2847562 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2848150 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2852535 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2852860 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2857285 00:36:52.514 Removing: /var/run/dpdk/spdk_pid2863167 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2865775 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2876205 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2885125 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2886967 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2887891 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2905497 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2909571 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2954848 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2960248 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2966016 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2972518 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2972594 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2973435 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2974348 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2975266 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2975734 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2975810 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2976119 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2976194 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2976199 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2977114 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2978029 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2978909 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2979409 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2979418 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2979651 
00:36:52.515 Removing: /var/run/dpdk/spdk_pid2980848 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2981859 00:36:52.515 Removing: /var/run/dpdk/spdk_pid2990478 00:36:52.515 Removing: /var/run/dpdk/spdk_pid3018877 00:36:52.515 Removing: /var/run/dpdk/spdk_pid3023993 00:36:52.515 Removing: /var/run/dpdk/spdk_pid3025722 00:36:52.515 Removing: /var/run/dpdk/spdk_pid3027426 00:36:52.515 Removing: /var/run/dpdk/spdk_pid3027579 00:36:52.515 Removing: /var/run/dpdk/spdk_pid3027813 00:36:52.515 Removing: /var/run/dpdk/spdk_pid3027825 00:36:52.515 Removing: /var/run/dpdk/spdk_pid3028331 00:36:52.515 Removing: /var/run/dpdk/spdk_pid3030170 00:36:52.515 Removing: /var/run/dpdk/spdk_pid3031001 00:36:52.515 Removing: /var/run/dpdk/spdk_pid3031433 00:36:52.515 Removing: /var/run/dpdk/spdk_pid3033706 00:36:52.515 Removing: /var/run/dpdk/spdk_pid3034097 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3034745 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3039019 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3044623 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3044625 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3044627 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3048424 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3056772 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3060690 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3067314 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3068632 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3070182 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3071500 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3076215 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3080543 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3084562 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3091937 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3091948 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3096651 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3096890 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3097115 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3097372 00:36:52.774 Removing: 
/var/run/dpdk/spdk_pid3097495 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3102068 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3102641 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3107199 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3109740 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3115249 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3120976 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3129755 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3136929 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3136978 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3155563 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3156034 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3156634 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3157191 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3157927 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3158402 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3158890 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3159568 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3163866 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3164340 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3170420 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3170571 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3175944 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3180176 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3189963 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3190493 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3194789 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3195161 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3199432 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3205066 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3207775 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3218100 00:36:52.774 Removing: /var/run/dpdk/spdk_pid3226856 00:36:52.775 Removing: /var/run/dpdk/spdk_pid3228600 00:36:52.775 Removing: /var/run/dpdk/spdk_pid3229520 00:36:52.775 Removing: /var/run/dpdk/spdk_pid3245653 00:36:52.775 Removing: /var/run/dpdk/spdk_pid3249473 00:36:52.775 Removing: /var/run/dpdk/spdk_pid3252239 
00:36:52.775 Removing: /var/run/dpdk/spdk_pid3260655 00:36:52.775 Removing: /var/run/dpdk/spdk_pid3260756 00:36:52.775 Removing: /var/run/dpdk/spdk_pid3265874 00:36:52.775 Removing: /var/run/dpdk/spdk_pid3267840 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3269801 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3270854 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3272824 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3274100 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3282852 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3283314 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3283935 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3286269 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3286746 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3287300 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3291244 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3291248 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3292774 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3293326 00:36:53.034 Removing: /var/run/dpdk/spdk_pid3293332 00:36:53.034 Clean 00:36:53.034 15:53:58 -- common/autotest_common.sh@1453 -- # return 0 00:36:53.034 15:53:58 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:53.034 15:53:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:53.034 15:53:58 -- common/autotest_common.sh@10 -- # set +x 00:36:53.034 15:53:58 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:53.034 15:53:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:53.034 15:53:58 -- common/autotest_common.sh@10 -- # set +x 00:36:53.034 15:53:58 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:53.034 15:53:58 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:53.034 15:53:58 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:53.034 15:53:58 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:53.034 15:53:58 
-- spdk/autotest.sh@398 -- # hostname 00:36:53.034 15:53:58 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:53.293 geninfo: WARNING: invalid characters removed from testname! 00:37:15.226 15:54:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:16.603 15:54:22 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:18.508 15:54:24 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:20.408 15:54:26 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:22.311 15:54:28 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:24.218 15:54:29 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:26.122 15:54:31 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:26.123 15:54:31 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:26.123 15:54:31 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:37:26.123 15:54:31 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:26.123 15:54:31 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:26.123 15:54:31 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:26.123 + [[ -n 
2733864 ]] 00:37:26.123 + sudo kill 2733864 00:37:26.133 [Pipeline] } 00:37:26.149 [Pipeline] // stage 00:37:26.154 [Pipeline] } 00:37:26.169 [Pipeline] // timeout 00:37:26.174 [Pipeline] } 00:37:26.189 [Pipeline] // catchError 00:37:26.194 [Pipeline] } 00:37:26.210 [Pipeline] // wrap 00:37:26.216 [Pipeline] } 00:37:26.230 [Pipeline] // catchError 00:37:26.239 [Pipeline] stage 00:37:26.242 [Pipeline] { (Epilogue) 00:37:26.256 [Pipeline] catchError 00:37:26.257 [Pipeline] { 00:37:26.271 [Pipeline] echo 00:37:26.272 Cleanup processes 00:37:26.278 [Pipeline] sh 00:37:26.564 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:26.564 3304504 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:26.578 [Pipeline] sh 00:37:26.862 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:26.862 ++ grep -v 'sudo pgrep' 00:37:26.862 ++ awk '{print $1}' 00:37:26.862 + sudo kill -9 00:37:26.862 + true 00:37:26.875 [Pipeline] sh 00:37:27.165 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:39.430 [Pipeline] sh 00:37:39.712 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:39.712 Artifacts sizes are good 00:37:39.726 [Pipeline] archiveArtifacts 00:37:39.733 Archiving artifacts 00:37:39.875 [Pipeline] sh 00:37:40.212 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:40.228 [Pipeline] cleanWs 00:37:40.239 [WS-CLEANUP] Deleting project workspace... 00:37:40.239 [WS-CLEANUP] Deferred wipeout is used... 00:37:40.246 [WS-CLEANUP] done 00:37:40.248 [Pipeline] } 00:37:40.267 [Pipeline] // catchError 00:37:40.279 [Pipeline] sh 00:37:40.563 + logger -p user.info -t JENKINS-CI 00:37:40.572 [Pipeline] } 00:37:40.587 [Pipeline] // stage 00:37:40.594 [Pipeline] } 00:37:40.611 [Pipeline] // node 00:37:40.618 [Pipeline] End of Pipeline 00:37:40.669 Finished: SUCCESS